

# Amazon Bedrock in SageMaker Unified Studio
<a name="bedrock"></a>

With Amazon Bedrock in SageMaker Unified Studio you can build generative AI apps that use Amazon Bedrock models and features, such as knowledge bases and guardrails, without needing to write any code.

To use Amazon Bedrock in SageMaker Unified Studio, you must be a member of an Amazon SageMaker Unified Studio domain. Your organization will provide you with login details. Contact your administrator if you don't have your login details.

Your organization's administrator determines which Amazon Bedrock models and features you have access to. Contact your organization's administrator if you need access to a model or feature that you don't currently have access to.

**Note**  
If you are an administrator and need information about managing Amazon Bedrock in SageMaker Unified Studio, see [Amazon Bedrock in SageMaker Unified Studio](https://docs.aws.amazon.com/sagemaker-unified-studio/latest/adminguide/amazon-bedrock-ide.html) in the *Amazon SageMaker Unified Studio admin guide*. 

## Discover Amazon Bedrock in SageMaker Unified Studio playgrounds
<a name="getting-started-explore"></a>

Amazon Bedrock in SageMaker Unified Studio provides various options for discovering and experimenting with Amazon Bedrock models and apps.

With the model catalog you can find information about the Amazon Bedrock models that are available to you and decide which model is suitable for your use case. Different models have different capabilities and modalities. For more information, see [Find serverless models with the Amazon Bedrock model catalog](model-catalog.md).

Amazon Bedrock in SageMaker Unified Studio offers two playgrounds for you to experiment with Amazon Bedrock models in: the [chat](bedrock-explore-chat-playground.md) playground and the [image and video](explore-image-playground.md) playground. With the chat playground, you can generate text responses from a model by sending text and image prompts. You can also interact with chat agent apps that have been shared with you. With the image and video playground, you can generate and edit images and videos by sending text and image prompts to a suitable model. For more information, see [Experiment with the Amazon Bedrock playgrounds](bedrock-playgrounds.md).

## Build generative AI apps
<a name="getting-started-build"></a>

Within an Amazon SageMaker Unified Studio project, you can create two types of generative AI apps: a [chat agent app](create-chat-app.md) and a [flow app](create-flows-app.md). You can use a chat agent app to chat with an Amazon Bedrock model through a conversational interface, typically by sending prompts (text or image) and receiving responses. You can use a flow app to link prompts, supported Amazon Bedrock models, and other units of work, such as a knowledge base, and create generative AI workflows.

Apps that you create with Amazon Bedrock in SageMaker Unified Studio can integrate the following Amazon Bedrock features. 
+ **[Data sources](data-sources.md)** — Enrich apps by including context that is received from querying a knowledge base or a document. 
+ **[Guardrails](guardrails.md)** — Implement safeguards for your Amazon Bedrock in SageMaker Unified Studio app based on your use cases and responsible AI policies. 
+ **[Functions](functions.md)** — Call a function with a model to access a specific capability when handling a prompt. 
+ **[Prompts](prompt-mgmt.md)** — Access reusable prompts that you can use in a flow app.

Within a project, you can use the *asset gallery* to organize the prompts and components that you use for an app. A component is an Amazon Bedrock knowledge base, guardrail, or function.

A critical part of creating a generative AI app is deciding which model to use and which model settings to use. To help you decide, you can [evaluate](evaluation.md) a model for different task types.

If you work on a team, you can collaborate by [sharing](app-share.md) an app with other team members. You can also [export](app-export.md) an app so that you can use the app in your own environment.

You can clone the repository that holds your Amazon SageMaker Unified Studio project files to your computer. However, we don't recommend making changes to your project's files on your local desktop, as this may break your project's apps and components.

# Find serverless models with the Amazon Bedrock model catalog
<a name="model-catalog"></a>

The Amazon Bedrock in SageMaker Unified Studio model catalog is where you can find the serverless Amazon Bedrock foundation models that you have access to. You can group models by their modality or by their provider. The modality of a model represents the type of input data that the model is trained on and is able to process, such as text or image data. For more information, see [Supported foundation models in Amazon Bedrock](https://docs.aws.amazon.com/bedrock/latest/userguide/models-supported.html). 

If you can't find a specific model, ask your administrator if you have permissions to access the model.

To find out information about a model, such as supported use cases, select the model tile in the model catalog. When choosing a model, consider the following:
+ Amazon Bedrock models support differing inference parameters and capabilities. For more information, see [Amazon Bedrock foundation model information](https://docs.aws.amazon.com/bedrock/latest/userguide/foundation-models-reference.html) in the *Amazon Bedrock user guide*.
+ Amazon Bedrock in SageMaker Unified Studio supports Amazon Bedrock foundation models with on-demand throughput and [cross-region inference](https://docs.aws.amazon.com/bedrock/latest/userguide/cross-region-inference.html).

  Models that support cross-region inference can increase throughput and improve resiliency by routing requests to different AWS Regions during peak utilization bursts. In the model catalog (and the model selector in app configuration), the text *Cross-region* identifies such a model.
+ Amazon Bedrock in SageMaker Unified Studio doesn't support [Provisioned throughput](https://docs.aws.amazon.com/bedrock/latest/userguide/throughput.html), [custom models](https://docs.aws.amazon.com/bedrock/latest/userguide/custom-models.html), or [imported models](https://docs.aws.amazon.com/bedrock/latest/userguide/model-customization-import-model.html).
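
If you later work with the Amazon Bedrock API directly, you can apply the same on-demand restriction programmatically. The following sketch filters a list of model summaries down to those that support on-demand throughput; the dictionary shape loosely follows the Amazon Bedrock `ListFoundationModels` response, and the sample entries are invented for illustration.

```python
# Hypothetical sketch: keep only models that support on-demand throughput.
# The dictionary shape loosely follows the Amazon Bedrock
# ListFoundationModels response; the sample entries are invented.

def on_demand_models(model_summaries):
    """Return the model IDs that list ON_DEMAND among their inference types."""
    return [
        m["modelId"]
        for m in model_summaries
        if "ON_DEMAND" in m.get("inferenceTypesSupported", [])
    ]

sample = [
    {"modelId": "example.text-model-v1", "inferenceTypesSupported": ["ON_DEMAND"]},
    {"modelId": "example.custom-model-v1", "inferenceTypesSupported": ["PROVISIONED"]},
]
print(on_demand_models(sample))  # ['example.text-model-v1']
```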

If the model is suitable for your needs, you can choose the menu button to start using the model. Depending on the model, you can choose from the following actions:
+ [Build chat agent app](create-chat-app.md) – Create an app in which users can chat with a model.
+ [Build flow app](create-flows-app.md) – Visually create the workflow for an app.
+ [Build prompt](prompt-mgmt.md) – Create reusable prompts for use in a flow app.
+ [Evaluate model](evaluation.md) – Evaluate the performance of a model for your use case.

The following procedure shows how to open the model catalog from the Amazon Bedrock in SageMaker Unified Studio playground. You can also access the model catalog from your projects. Your administrator might give you access to different models in your projects. To check the models that you can access in a project, open or create a project, and then select **Models** in the navigation pane to open the model catalog.



**To open the model catalog in the playground**

1. Navigate to the Amazon SageMaker Unified Studio landing page by using the URL from your administrator.

1. Access Amazon SageMaker Unified Studio using your IAM or single sign-on (SSO) credentials. For more information, see [Access Amazon SageMaker Unified Studio](getting-started-access-the-portal.md).

1. At the top of the page, choose **Discover**.

1. Under **Data and model catalog**, choose **Amazon Bedrock models**. The Amazon Bedrock in SageMaker Unified Studio playground opens at the model catalog.

1. (Optional) Choose **Group by: Modality** and select **Provider** to group the list by model provider.

1. Choose a model to get information about the model.

1. If you're ready to build with the model, choose **Action** and select the appropriate action. You can also choose an action from the model tile on the model catalog page.

1. Choose **Amazon Bedrock model catalog** to go back to the model catalog page.

# What is a prompt?
<a name="explore-prompts"></a>

A prompt is the input that you send to a model in order for it to generate a response. For example, you could send the following user prompt to a model: **What is Avebury stone circle?**.

This prompt would likely generate a response similar to the following:

```
Avebury stone circle is a Neolithic monument located in Wiltshire, England. 
It consists of a massive circular bank and ditch, with a large outer circle of standing stones
that originally numbered around 100.
```
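
Outside the playground, the same kind of prompt can be sent programmatically. The sketch below assumes the AWS SDK for Python (boto3) and the Bedrock Runtime Converse API; the model ID is a placeholder, so substitute a model that your administrator has granted you access to.

```python
# Minimal sketch: send a text prompt with the Bedrock Runtime Converse API.
# The model ID below is a placeholder; use one you have access to.

def build_messages(prompt):
    """Package a user prompt in the Converse API message format."""
    return [{"role": "user", "content": [{"text": prompt}]}]

messages = build_messages("What is Avebury stone circle?")

# import boto3
# client = boto3.client("bedrock-runtime")
# response = client.converse(modelId="<your-model-id>", messages=messages)
# print(response["output"]["message"]["content"][0]["text"])
```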

Some models support *multimodal* prompts, which are prompts that support different types of media input, such as text, images, or video. For example, you could send an image to a model and ask it to describe what the image contains. Not all models support multimodal prompts, and modality support varies by model. For information on how to best create prompts for a specific model, see [Prompt engineering guides](#prompt-guides).

In addition to user prompts, Amazon Bedrock in SageMaker Unified Studio also supports inference parameters and system instructions, which allow you to customize and influence model behavior. The following sections provide information and guidance on how to use inference parameters and system instructions.

**Topics**
+ [Inference parameters](#inference-parameters)
+ [System instructions](#system-prompts)
+ [Prompt engineering guides](#prompt-guides)

## Inference parameters
<a name="inference-parameters"></a>

Inference parameters are values that you can adjust to influence how a model generates a response to a prompt. For example, in the chat agent app you create in [Build a chat agent app with Amazon Bedrock](create-chat-app.md), you can use inference parameters to adjust the randomness and diversity of the songs that the model generates for a playlist. 

You can apply inference parameters to models you use in the [Amazon Bedrock playgrounds](bedrock-playgrounds.md), [chat agent apps](create-chat-app.md), and [flow apps](create-flows-app.md).

### Randomness and diversity
<a name="inference-randomness"></a>

For any given sequence, a model determines a probability distribution of options for the next token in the sequence. To generate each token in an output, the model samples from this distribution. Randomness and diversity refer to the amount of variation in a model's response. You can control these factors by limiting or adjusting the distribution. Foundation models typically support the following parameters to control randomness and diversity in the response.
+ **Temperature** – Affects the shape of the probability distribution for the predicted output and influences the likelihood of the model selecting lower-probability outputs.
  + Choose a lower value to influence the model to select higher-probability outputs.
  + Choose a higher value to influence the model to select lower-probability outputs.

  In technical terms, the temperature modulates the probability mass function for the next token. A lower temperature steepens the function and leads to more deterministic responses, and a higher temperature flattens the function and leads to more random responses.
+ **Top K** – The number of most-likely candidates that the model considers for the next token.
  + Choose a lower value to decrease the size of the pool and limit the options to more likely outputs.
  + Choose a higher value to increase the size of the pool and allow the model to consider less likely outputs.

  For example, if you choose a value of 50 for Top K, the model selects from the 50 most probable tokens that could be next in the sequence.
+ **Top P** – The percentage of most-likely candidates that the model considers for the next token.
  + Choose a lower value to decrease the size of the pool and limit the options to more likely outputs.
  + Choose a higher value to increase the size of the pool and allow the model to consider less likely outputs.

  In technical terms, the model computes the cumulative probability distribution for the set of responses and considers only the top P% of the distribution.

  For example, if you choose a value of 0.8 for Top P, the model selects from the top 80% of the probability distribution of tokens that could be next in the sequence.

The following table summarizes the effects of these parameters.



| Parameter | Effect of lower value | Effect of higher value | 
| --- | --- | --- | 
| Temperature | Increase likelihood of higher-probability tokens; decrease likelihood of lower-probability tokens | Increase likelihood of lower-probability tokens; decrease likelihood of higher-probability tokens | 
| Top K | Remove lower-probability tokens | Allow lower-probability tokens | 
| Top P | Remove lower-probability tokens | Allow lower-probability tokens | 

As an example to understand these parameters, consider the example prompt **I hear the hoof beats of**. Let's say that the model determines the following three words to be candidates for the next token. The model also assigns a probability for each word.

```
{
    "horses": 0.7,
    "zebras": 0.2,
    "unicorns": 0.1
}
```
+ If you set a high **temperature**, the probability distribution is flattened and the probabilities become less different, which would increase the probability of choosing "unicorns" and decrease the probability of choosing "horses".
+ If you set **Top K** as 2, the model only considers the top 2 most likely candidates: "horses" and "zebras."
+ If you set **Top P** as 0.7, the model only considers "horses" because it is the only candidate that lies in the top 70% of the probability distribution. If you set **Top P** as 0.9, the model considers "horses" and "zebras" as they lie in the top 90% of probability distribution.
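
The filtering described above can be sketched in a few lines of Python. This is a toy illustration over the three-token example distribution, not how a production model implements sampling; real models apply these steps over the full vocabulary.

```python
import math

def apply_temperature(probs, temperature):
    """Rescale probabilities; a higher temperature flattens the distribution."""
    scaled = {t: math.exp(math.log(p) / temperature) for t, p in probs.items()}
    total = sum(scaled.values())
    return {t: v / total for t, v in scaled.items()}

def top_k(probs, k):
    """Keep only the k most likely candidates."""
    ranked = sorted(probs.items(), key=lambda kv: kv[1], reverse=True)
    return dict(ranked[:k])

def top_p(probs, p):
    """Keep the smallest set of top candidates whose cumulative probability covers p."""
    ranked = sorted(probs.items(), key=lambda kv: kv[1], reverse=True)
    kept, cumulative = {}, 0.0
    for token, prob in ranked:
        kept[token] = prob
        cumulative += prob
        if cumulative >= p - 1e-9:  # tolerance for floating-point rounding
            break
    return kept

candidates = {"horses": 0.7, "zebras": 0.2, "unicorns": 0.1}
print(top_k(candidates, 2))    # {'horses': 0.7, 'zebras': 0.2}
print(top_p(candidates, 0.7))  # {'horses': 0.7}
print(list(top_p(candidates, 0.9)))  # ['horses', 'zebras']
```

Running `apply_temperature(candidates, 2.0)` flattens the distribution: "horses" drops below 0.7 and "unicorns" rises above 0.1, matching the behavior described for a high temperature.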

## System instructions
<a name="system-prompts"></a>

A system instruction is an overarching initial guideline that defines how a model should behave in future interactions. System instructions provide context to the model about the task it should perform or the persona it should adopt during the conversation.

For example, you could use a system instruction to specify that the model should behave as an app that creates playlists for a radio station that plays rock and pop music. You can then use the model to create playlists of rock and pop songs based on different themes, such as songs that are related by artist.

You can apply system instructions to models you use in the [Amazon Bedrock playgrounds](bedrock-playgrounds.md), [chat agent apps](create-chat-app.md), and [flow apps](create-flows-app.md).
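
If you later work with a model programmatically, a system instruction maps naturally to the `system` field of the Bedrock Runtime Converse API. The sketch below is a hypothetical illustration using the radio-station example; `build_request` is our own helper, not part of any SDK.

```python
# Sketch: pair a system instruction with a user prompt in the Converse API
# request shape. build_request is a hypothetical helper for illustration.

def build_request(system_text, user_text):
    """Return keyword arguments for a converse() call with a system instruction."""
    return {
        "system": [{"text": system_text}],
        "messages": [{"role": "user", "content": [{"text": user_text}]}],
    }

request = build_request(
    "You create playlists for a radio station that plays rock and pop music.",
    "Make a playlist of songs that are related by artist.",
)

# import boto3
# client = boto3.client("bedrock-runtime")
# response = client.converse(modelId="<your-model-id>", **request)
```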

## Prompt engineering guides
<a name="prompt-guides"></a>

Amazon Bedrock in SageMaker Unified Studio provides models from a variety of model providers. Each provider offers guidance on how to best create prompts for its models. 
+ **Amazon Nova user guide:** [https://docs.aws.amazon.com/nova/latest/userguide/what-is-nova.html](https://docs.aws.amazon.com/nova/latest/userguide/what-is-nova.html) 
+ **Anthropic Claude model prompt guide:** [https://docs.anthropic.com/claude/docs](https://docs.anthropic.com/claude/docs) 
+ **Anthropic Claude prompt engineering resources:** [https://docs.anthropic.com/claude/docs/guide-to-anthropics-prompt-engineering-resources](https://docs.anthropic.com/claude/docs/guide-to-anthropics-prompt-engineering-resources) 
+ **Cohere prompt guide:** [https://txt.cohere.com/how-to-train-your-pet-llm-prompt-engineering](https://txt.cohere.com/how-to-train-your-pet-llm-prompt-engineering) 
+  **AI21 Labs Jurassic model prompt guide:** [https://docs.ai21.com/docs/prompt-engineering](https://docs.ai21.com/docs/prompt-engineering) 
+  **Meta Llama 2 prompt guide:** [https://ai.meta.com/llama/get-started/#prompting](https://ai.meta.com/llama/get-started/#prompting) 
+  **Stability documentation:** [https://platform.stability.ai/docs/getting-started](https://platform.stability.ai/docs/getting-started) 
+  **Mistral AI prompt guide:** [https://docs.mistral.ai/guides/prompting_capabilities/](https://docs.mistral.ai/guides/prompting_capabilities/) 

For general guidelines about creating prompts with Amazon Bedrock, see [General guidelines for Amazon Bedrock LLM users](https://docs.aws.amazon.com/bedrock/latest/userguide/general-guidelines-for-bedrock-users.html).

# Experiment with the Amazon Bedrock playgrounds
<a name="bedrock-playgrounds"></a>

An Amazon Bedrock in SageMaker Unified Studio playground lets you experiment with Amazon Bedrock foundation models, so that you can choose the right model for your use case. You can also experiment with chat agent apps that others share with you. Amazon Bedrock in SageMaker Unified Studio provides the following playgrounds:
+ **Chat playground** – Chat with an Amazon Bedrock model by sending prompts to the model and answering the response that the model generates. You can also experiment with chat agent apps that are shared with you.
+ **Image and video playground** – Generate images and videos with a model. You can use prompts, images, and videos to describe the content you want to generate.

Each playground lets you choose a model and experiment with settings, such as the [inference parameters](explore-prompts.md#inference-parameters) that affect the output that the model generates. To help you experiment, you can compare the output of multiple models and chat agent apps. 

If you want to know more about a model, use the model catalog to find information such as supported use cases and model attributes. For more information, see [Find serverless models with the Amazon Bedrock model catalog](model-catalog.md).

**Warning**  
Generative AI may give inaccurate responses. Avoid sharing sensitive information. Chats may be visible to others in your organization.

Other Amazon Bedrock in SageMaker Unified Studio users can share apps and prompts so that you can experiment with them in a playground. For more information, see [Access shared generative AI assets in an Amazon Bedrock playground](bedrock-playground-shared-assets.md).

After you familiarize yourself with a model in a playground, you can try creating your own Amazon Bedrock in SageMaker Unified Studio app, such as a [chat agent app](create-chat-app.md). 

**Topics**
+ [Chat with a model in the Amazon Bedrock chat playground](bedrock-explore-chat-playground.md)
+ [Chat with an app in the Amazon Bedrock chat playground](bedrock-explore-chat-playground-app.md)
+ [Generate an image with the Amazon Bedrock image and video playground](explore-image-playground.md)
+ [Generate a video with the Amazon Bedrock image and video playground](bedrock-explore-video-playground.md)
+ [Access shared generative AI assets in an Amazon Bedrock playground](bedrock-playground-shared-assets.md)

# Chat with a model in the Amazon Bedrock chat playground
<a name="bedrock-explore-chat-playground"></a>

The Amazon Bedrock in SageMaker Unified Studio chat playground allows you to chat with an Amazon Bedrock model and try chat agent apps that are [shared](bedrock-explore-chat-playground-app.md) with you. A chat provides a back-and-forth, dialogue-like interaction between you and an Amazon Bedrock model. The model retains context during a chat, allowing for coherent and relevant responses. You chat with a model by sending a prompt to the model and receiving the response that the model generates. You continue the chat by sending further prompts.

If a model supports multimodal prompts, you can send prompts that contain text and images. A chat can contain multiple text and image prompts. After you finish a chat, you can reset the playground to begin a new chat. Amazon Bedrock models support differing modalities. For more information, see [Supported foundation models in Amazon Bedrock](https://docs.aws.amazon.com/bedrock/latest/userguide/models-supported.html). 

The maximum image file size is 5 MB. You can upload images that are in JPG, PNG, GIF, or WebP format. 
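
If you automate uploads or preflight user files, you can enforce these limits before sending a prompt. The following small helper sketch encodes the size and format rules stated above; the limits are taken from this guide, so adjust the constants if your environment differs.

```python
from pathlib import Path

# Sketch: validate an image against the chat playground limits noted above
# (up to 5 MB; JPG, PNG, GIF, or WebP). Constants come from this guide.

MAX_BYTES = 5 * 1024 * 1024
ALLOWED = {".jpg", ".jpeg", ".png", ".gif", ".webp"}

def validate_image(path, size_bytes):
    """Return True if the file extension and size are within playground limits."""
    return Path(path).suffix.lower() in ALLOWED and size_bytes <= MAX_BYTES

print(validate_image("photo.png", 1024))              # True
print(validate_image("scan.bmp", 1024))               # False (format)
print(validate_image("poster.jpg", 6 * 1024 * 1024))  # False (size)
```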

When you run a prompt in the chat playground, you get the following information about the request: 
+ **Input tokens** — The number of input tokens used by the foundation model during inference.
+ **Output tokens** — The number of tokens generated in a response by the foundation model.
+ **Latency** — The amount of time the foundation model takes to generate each token in a sequence, based on the [on-demand](https://docs.aws.amazon.com/bedrock/latest/userguide/throughput.html) throughput.
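
When you call a model programmatically, the Converse API returns equivalent metadata in the response's `usage` and `metrics` fields. The sketch below extracts those values from a response-shaped dictionary; the sample numbers are invented for illustration.

```python
# Sketch: read token counts and latency from a Converse API-shaped response.
# The sample response below is invented for illustration.

def summarize_usage(response):
    """Return (input tokens, output tokens, latency in ms) from a response dict."""
    usage = response.get("usage", {})
    metrics = response.get("metrics", {})
    return (
        usage.get("inputTokens", 0),
        usage.get("outputTokens", 0),
        metrics.get("latencyMs", 0),
    )

sample_response = {
    "usage": {"inputTokens": 12, "outputTokens": 85},
    "metrics": {"latencyMs": 640},
}
print(summarize_usage(sample_response))  # (12, 85, 640)
```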

The [chat playground](#bedrock-explore-chat-playground) provides quick start prompts that illustrate the kinds of prompts that you can send to a model.

Optionally, you can compare the outputs from up to 3 shared apps and models. You can make configuration changes for models, such as [inference parameters](explore-prompts.md#inference-parameters) and [system instructions](explore-prompts.md#system-prompts), and compare the results. You can't make configuration changes for shared apps. 

**To chat with a model**

1. Navigate to the Amazon SageMaker Unified Studio landing page by using the URL from your administrator.

1. Access Amazon SageMaker Unified Studio using your IAM or single sign-on (SSO) credentials. For more information, see [Access Amazon SageMaker Unified Studio](getting-started-access-the-portal.md).

1. At the top of the page, choose **Discover**.

1. In the **Generative AI** section, choose **Chat playground** to open the chat playground.  
![\[Open Amazon Bedrock in SageMaker Unified Studio chat playground.\]](http://docs.aws.amazon.com/sagemaker-unified-studio/latest/userguide/images/bedrock/bedrock-ide-discover.png)

1. In **Type**, select **Model**, and then select a model to use in **Model**. For full information about the model, choose **View full model details** in the information panel. For more information, see [Find serverless models with the Amazon Bedrock model catalog](model-catalog.md). If you don't have access to an appropriate model, contact your administrator. Different models might not support all features.

1. In the **Enter prompt** text box, enter **What is Avebury stone circle?**.

1. (Optional) If the model you chose is a reasoning model, you can choose **Reason** to have the model include its reasoning in the response. For more information, see [Enhance model responses with model reasoning](https://docs.aws.amazon.com/bedrock/latest/userguide/inference-reasoning.html) in the *Amazon Bedrock user guide*.

1. Press Enter on your keyboard, or choose the run button, to send the prompt to the model. Amazon Bedrock in SageMaker Unified Studio shows the response from the model in the playground.  
![\[Run prompt in Amazon Bedrock in SageMaker Unified Studio chat playground.\]](http://docs.aws.amazon.com/sagemaker-unified-studio/latest/userguide/images/bedrock/bedrock-ide-chat-playground-run-prompt.png)

1. Continue the chat by entering the prompt **Is there a museum there?** and pressing Enter. 

   The response shows how the model uses the previous prompt as context for generating its next response.

1. Choose **Reset** to start a new chat with the model.

1. Influence the model response by doing the following:

   1. Enter and run a prompt. Note the response from the model.

   1. Choose the configurations menu to open the **Configurations** pane.  
![\[Inference parameters in Amazon Bedrock in SageMaker Unified Studio chat playground.\]](http://docs.aws.amazon.com/sagemaker-unified-studio/latest/userguide/images/bedrock/bedrock-ide-chat-playground-inference.png)

   1. Influence the model response by making [inference parameters](explore-prompts.md#inference-parameters) changes.

   1. (Optional) In **System instructions**, enter any overarching system instructions that you want the model to apply for future interactions.

   1. Run the prompt again and compare the response with the previous response. 

1. Choose **Reset** to start a new chat with the model.

1. Try sending an image to a model by doing the following:

   1. For **Model**, choose a model that supports [images](https://docs.aws.amazon.com/bedrock/latest/userguide/models-supported.html).

   1. Choose the attachment button at the left of the **Enter prompt** text box.   
![\[Run prompt in Amazon Bedrock in SageMaker Unified Studio chat playground.\]](http://docs.aws.amazon.com/sagemaker-unified-studio/latest/userguide/images/bedrock/bedrock-ide-chat-playground-run-prompt-attach.png)

   1. In the open file dialog box, choose an image from your local computer.

   1. In the text box, next to the image that you uploaded, enter **What's in this image?**. 

   1. Press Enter on your keyboard to send the prompt to the model. The response from the model describes the image.

1. (Optional) Try using another model and different prompts. Different models have different recommendations for creating, or engineering, prompts. For more information, see [Prompt engineering guides](explore-prompts.md#prompt-guides).

1. (Optional) Compare the output from multiple models, or [shared apps](bedrock-explore-chat-playground-app.md).

   1. In the playground, turn on **Compare mode**.

   1. In both panes, select the model that you want to compare. If you want to use a shared app, select **App** in **Type** and then select the app in **App**.

   1. Enter a prompt in the text box and run the prompt. The output from each model is shown. You can choose the copy icon to copy the prompt or model response to the clipboard.

   1. (Optional) Choose **View configs** to make configuration changes, such as [inference parameters](explore-prompts.md#inference-parameters). Choose **View chats** to return to the chat page.

   1. (Optional) Choose **Add chat window** to add a third window. You can compare up to 3 models or apps.

   1. Turn off **Compare mode** to stop comparing models.

Now that you are familiar with the chat playground, try creating an Amazon Bedrock in SageMaker Unified Studio app next. For more information, see [Build a chat agent app with Amazon Bedrock](create-chat-app.md). 

# Chat with an app in the Amazon Bedrock chat playground
<a name="bedrock-explore-chat-playground-app"></a>

You can use the chat playground to experiment with chat agent apps that are shared with you. When you open a shared app, you can send prompts to the app and see the response. You can't make changes to the shared app.

Optionally, you can compare the outputs from up to 3 shared apps and [models](bedrock-explore-chat-playground.md). You can view the configuration for a shared app, but you can't make configuration changes.

To learn how to share apps that you create, see [Share an Amazon Bedrock chat agent app](app-share.md).

**To chat with a shared app**

1. Navigate to the Amazon SageMaker Unified Studio landing page by using the URL from your administrator.

1. Access Amazon SageMaker Unified Studio using your IAM or single sign-on (SSO) credentials. For more information, see [Access Amazon SageMaker Unified Studio](getting-started-access-the-portal.md).

1. At the top of the page, choose **Discover**.

1. In the **Generative AI** section, choose **Chat playground** to open the chat playground.  
![\[Open Amazon Bedrock in SageMaker Unified Studio chat playground.\]](http://docs.aws.amazon.com/sagemaker-unified-studio/latest/userguide/images/bedrock/bedrock-ide-discover.png)

1. In **Type**, select **App**, and then select an app to use in **App**.  
![\[Open Amazon Bedrock in SageMaker Unified Studio chat playground.\]](http://docs.aws.amazon.com/sagemaker-unified-studio/latest/userguide/images/bedrock/bedrock-ide-chat-playground-app.png)

1. In the **Enter prompt** text box at the bottom of the page, enter the prompt that you want to use. If the app builder has customized the default text for the text box, you might see different placeholder text.

1. Press Enter on your keyboard to send the prompt to the app.

1. (Optional) Compare the output from multiple apps, or models.

   1. In the playground, turn on **Compare mode**.

   1. In both panes, select the app that you want to compare. 

   1. Enter a prompt in the text box and run the prompt.

   1. (Optional) Choose **View configs** to view the app configurations, such as [inference parameters](explore-prompts.md#inference-parameters). Choose **View chats** to return to the chat page.

   1. (Optional) Choose **Add chat window** to add a third window. You can compare up to 3 models or apps.

   1. Turn off **Compare mode** to stop comparing models.

# Generate an image with the Amazon Bedrock image and video playground
<a name="explore-image-playground"></a>

The image and video playground is an interactive environment that lets you specify actions that generate and manipulate images using natural language prompts, reference images, and suitable Amazon Bedrock models. 

## Actions for generating images
<a name="bedrock-image-actions"></a>

Within the image playground, you use an *action* to specify the image generation task that you want the model to do, such as replacing the background of an existing image. The actions that are available depend on the model you use.
+ **Generate image** — [Generates a new image](bedrock-image-playground-generate-image.md) from a prompt that you enter.
+ **Generate variations** — Use a prompt to generate a [variation of an existing image](bedrock-image-playground-generate-variations.md).
+ **Remove object** — [Removes an object](bedrock-image-playground-remove-object.md) from an image you supply. 
+ **Replace background** — [Replaces the background](bedrock-image-playground-replace-background.md) of an image with a new background. 
+ **Replace object** — [Replaces an object](bedrock-image-playground-replace-object.md) in an image with a different object.
+ **Edit image sandbox** — An [image sandbox](bedrock-image-playground-image-sandbox.md) that you can use to experiment with Stable Diffusion XL models. 

Some actions, such as generate variations, require a reference image that a model uses to generate a new image. An action might require you to use a mask tool to draw a bounding box around an area of the reference image, such as when you define an object that you want to remove with the remove object action. 

## Configuration options
<a name="bedrock-image-configuration"></a>

You can influence how a model generates an image by configuring the following options. The configuration changes you can make depend on the action you choose. 

### Negative prompt
<a name="bedrock-image-negative-prompt"></a>

A set of words or phrases that tells the model what not to include in the image that it generates. For example, you can use the term *lowres* to avoid generating low-resolution or blurry images.

### Reference image
<a name="bedrock-image-reference-image"></a>

In certain actions, such as generate variations or replace background, you specify a reference image that the model uses to process the action.

### Response image
<a name="bedrock-image-response-image"></a>

You can specify the image dimensions, orientation, and number of images to generate.

### Advanced configuration options
<a name="bedrock-image-advanced-configurations"></a>

You can make advanced configuration changes that affect how the model generates images. All image generation models support the following: 
+ **Prompt strength** — Prompt strength is a numerical value that determines how strongly a model should adhere to the given text prompt. A higher prompt strength means the model will try to closely follow and prioritize the text description provided in the prompt when generating the image. Lower prompt strengths allow the model more creative freedom to deviate from the prompt.
+ **Seed** — A seed is a numeric value that a model uses to initialize its random number generator. The model uses the seed as a starting point for creating random patterns during image generation. This initial randomness influences things like the exact positioning, colors, textures, and compositions present in the image that the model generates.
+ **Similarity strength** — If you use the *Generate variations* action with a Titan Image Generator G1 V1 or a Titan Image Generator G1 V2 model, you can also configure the *Similarity Strength* advanced configuration. Similarity Strength specifies how similar the generated image should be to the input image. Use a lower value to introduce more randomness into the generated image. 
+ **Generate step** — If you use a Stable Diffusion XL model, you can configure the *Generate step* advanced configuration. Generate step determines how many times the image is sampled. More steps can result in a more accurate image.
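These options are set in the playground UI, but if you later move to the Amazon Bedrock API, the same knobs appear as fields in the request body. The following sketch builds a Titan Image Generator request; the mapping (*Prompt strength* to `cfgScale`, *Seed* to `seed`, *Negative prompt* to `negativeText`) follows the Titan request schema, and the helper function itself is purely illustrative.

```python
import json

def build_titan_text_to_image_request(prompt, negative_prompt=None,
                                      prompt_strength=8.0, seed=0,
                                      num_images=1, width=1024, height=1024):
    """Illustrative Titan Image Generator request body.

    prompt_strength maps to cfgScale: higher values make the model
    adhere more closely to the text prompt. Reusing the same seed
    with the same prompt reproduces the same image.
    """
    params = {"text": prompt}
    if negative_prompt:
        params["negativeText"] = negative_prompt
    return {
        "taskType": "TEXT_IMAGE",
        "textToImageParams": params,
        "imageGenerationConfig": {
            "numberOfImages": num_images,
            "width": width,
            "height": height,
            "cfgScale": prompt_strength,
            "seed": seed,
        },
    }

# The playground assembles an equivalent payload for you; with the API
# you would pass json.dumps(body) to the bedrock-runtime InvokeModel call.
body = build_titan_text_to_image_request(
    "A classic rock band playing on an outdoor stage",
    negative_prompt="lowres, blurry",
    prompt_strength=9.0,
    seed=42,
)
```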

**Topics**
+ [Negative prompt](#bedrock-image-negative-prompt)
+ [Reference image](#bedrock-image-reference-image)
+ [Response image](#bedrock-image-response-image)
+ [Advanced configuration options](#bedrock-image-advanced-configurations)

# Generate an image
<a name="bedrock-image-playground-generate-image"></a>

The following procedure shows you how to use a model to generate an image. You can set various configurations such as the number of images to generate and how strongly the prompt affects the generation of the image. For more information, see [Configuration options](explore-image-playground.md#bedrock-image-configuration).

**To generate an image in the image playground**

1. Navigate to the Amazon SageMaker Unified Studio landing page by using the URL from your administrator.

1. Access Amazon SageMaker Unified Studio using your IAM or single sign-on (SSO) credentials. For more information, see [Access Amazon SageMaker Unified Studio](getting-started-access-the-portal.md).

1. At the top of the page, choose **Discover**.

1. In the **GENERATIVE AI** section, choose **Image and video playground**.

1. If the **Configurations** pane isn't open, choose the configuration button.

1. For **Model** select a model to use.

1. For **Action**, choose **Generate image**.

1. In **Response image** do the following:

   1. For **Number of images** select the number of images that you want the model to generate. Not all models support changing this value.

   1. For **Orientation**, choose the orientation (landscape or portrait) for the images that the model generates.

   1. For **Size**, select the size, in pixels, of the images that the model generates. 

1. (Optional) In **Advanced configurations**, change how the model generates images by making advanced configuration changes. For more information, see [Advanced configuration options](explore-image-playground.md#bedrock-image-advanced-configurations).

1. In the **Enter prompt** text box, enter **Create a photo of a local classic rock band playing on an outdoor stage.**. Alternatively, enter a prompt of your choosing.

1. Press Enter on your keyboard to start the action. Amazon Bedrock in SageMaker Unified Studio shows the image that the model generates in the playground.

1. (Optional) See how different configuration parameters affect image generation by repeating steps 9 - 11 with different values. 
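If you want to reproduce this action outside the playground, it corresponds roughly to a `bedrock-runtime` `InvokeModel` call. The sketch below assumes boto3, configured AWS credentials, and access to a Titan Image Generator model; the model ID and response handling follow the Titan schema, but treat the details as an illustrative sketch rather than the playground's actual implementation.

```python
import base64
import json

def decode_first_image(payload):
    """Titan-style responses carry generated images as base64 strings;
    return the first one as raw bytes."""
    return base64.b64decode(payload["images"][0])

def generate_image(prompt, model_id="amazon.titan-image-generator-v2:0"):
    """Generate one image from a text prompt (assumes boto3 is installed
    and AWS credentials are configured)."""
    import boto3
    client = boto3.client("bedrock-runtime")
    body = {
        "taskType": "TEXT_IMAGE",
        "textToImageParams": {"text": prompt},
        "imageGenerationConfig": {"numberOfImages": 1},
    }
    response = client.invoke_model(modelId=model_id, body=json.dumps(body))
    return decode_first_image(json.loads(response["body"].read()))
```

For example, `generate_image("A photo of a local classic rock band playing on an outdoor stage")` would return PNG bytes that you could write to a file.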

# Generate a variation of an image
<a name="bedrock-image-playground-generate-variations"></a>

The following procedure shows you how to generate a variation of a reference image that you supply. You can set various configurations such as the number of images to generate and how strongly the prompt affects the generation of the image. For more information, see [Configuration options](explore-image-playground.md#bedrock-image-configuration).

**To generate a variation of an image**

1. Navigate to the Amazon SageMaker Unified Studio landing page by using the URL from your administrator.

1. Access Amazon SageMaker Unified Studio using your IAM or single sign-on (SSO) credentials. For more information, see [Access Amazon SageMaker Unified Studio](getting-started-access-the-portal.md).

1. At the top of the page, choose **Discover**.

1. In the **GENERATIVE AI** section, choose **Image and video playground**.

1. If the **Configurations** pane isn't open, choose the configuration button.

1. For **Model** select a model to use.

1. For **Action** choose **Generate variations**.

1. (Optional) For **Negative prompt** enter text that describes content or concepts that you do not want the model to include in the image.

1. In **Reference image** choose **Upload image** and upload the image that you want the model to use with the action. 

1. In **Response image** do the following:

   1. For **Number of images** select the number of images that you want the model to generate. Not all models support changing this value.

   1. For **Orientation**, choose the orientation (landscape or portrait) for the images that the model generates.

   1. For **Size**, select the size, in pixels, of the images that the model generates. 

1. (Optional) In **Advanced configurations**, change how the model generates images by making advanced configuration changes. For more information, see [Advanced configuration options](explore-image-playground.md#bedrock-image-advanced-configurations).

1. In the **Enter prompt** text box, enter the prompt that describes the image that you want the model to generate.

1. Press Enter on your keyboard to start the action. Amazon Bedrock in SageMaker Unified Studio shows the image that the model generates in the playground.
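Programmatically, the *Generate variations* action corresponds to Titan's `IMAGE_VARIATION` task type, where the reference image travels as a base64 string and *Similarity strength* becomes `similarityStrength`. The helper below is an illustrative sketch of that request shape, not the playground's actual implementation.

```python
import base64

def build_variation_request(image_bytes, prompt, similarity_strength=0.7,
                            num_images=1):
    """Illustrative Titan IMAGE_VARIATION request body. Lower
    similarity_strength values let the output deviate further from
    the reference image."""
    return {
        "taskType": "IMAGE_VARIATION",
        "imageVariationParams": {
            "text": prompt,
            # The uploaded reference image, base64-encoded.
            "images": [base64.b64encode(image_bytes).decode("utf-8")],
            "similarityStrength": similarity_strength,
        },
        "imageGenerationConfig": {"numberOfImages": num_images},
    }
```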

# Remove an object from an image
<a name="bedrock-image-playground-remove-object"></a>

The following procedure shows you how to use a model to remove an object from an image that you supply. For example, you could remove an unwanted person from an image. You can set various configurations such as the number of images to generate and how strongly the prompt affects the generation of the image. For more information, see [Configuration options](explore-image-playground.md#bedrock-image-configuration).

**Note**  
The object removal action is only available with Titan Image Generator G1 V1 and Titan Image Generator G1 V2 models.

**To remove an object from an image**

1. Navigate to the Amazon SageMaker Unified Studio landing page by using the URL from your administrator.

1. Access Amazon SageMaker Unified Studio using your IAM or single sign-on (SSO) credentials. For more information, see [Access Amazon SageMaker Unified Studio](getting-started-access-the-portal.md).

1. At the top of the page, choose **Discover**.

1. In the **GENERATIVE AI** section, choose **Image and video playground**.

1. If the **Configurations** pane isn't open, choose the configuration button.

1. For **Model** select a model to use.

1. For **Action** choose **Remove object**.

1. (Optional) For **Negative prompt** enter text that describes content or concepts that you don't want the model to include in the image.

1. In **Reference image** choose **Upload image** and upload the image that you want the model to use with the action. 

1. (Optional) In **Advanced configurations**, change how the model generates images by making advanced configuration changes. For more information, see [Advanced configuration options](explore-image-playground.md#bedrock-image-advanced-configurations).

1. In the center pane, use the masking tool to draw a bounding box around the area of the image that you want the action to update. You can do the following:

   1. Resize the bounding box by selecting a corner of the bounding box with your mouse button. Then, drag the mouse to resize the bounding box. Release the mouse button to complete resizing the bounding box.

   1. Move the bounding box by selecting the interior of the bounding box with your mouse button. Move the bounding box to the new location and release the mouse button.

1. Press Enter on your keyboard to start the action. Amazon Bedrock in SageMaker Unified Studio shows the image that the model generates in the playground.
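Under the hood, object removal maps to Titan's `INPAINTING` task type. In the playground you mask the region with a bounding box; the API equivalent accepts either a mask image or a `maskPrompt` that describes the object in text. The sketch below uses the text form and omits the replacement `text` field so the masked object is removed rather than replaced; treat the exact shape as an illustrative assumption.

```python
import base64

def build_remove_object_request(image_bytes, mask_prompt,
                                negative_prompt=None):
    """Illustrative Titan INPAINTING request that removes whatever
    matches mask_prompt (the playground builds a mask from your
    bounding box instead of a text description)."""
    params = {
        "image": base64.b64encode(image_bytes).decode("utf-8"),
        "maskPrompt": mask_prompt,
    }
    if negative_prompt:
        params["negativeText"] = negative_prompt
    return {
        "taskType": "INPAINTING",
        "inPaintingParams": params,
        "imageGenerationConfig": {"numberOfImages": 1},
    }
```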

# Replace an object in an image
<a name="bedrock-image-playground-replace-object"></a>

The following procedure shows you how to use a model to replace an object in an image that you supply. For example, you could replace a piece of furniture in an image with a different piece of furniture. You can set various configurations such as the number of images to generate and how strongly the prompt affects the generation of the image. For more information, see [Configuration options](explore-image-playground.md#bedrock-image-configuration).

**Note**  
The object replacement action is only available with Titan Image Generator G1 V1 and Titan Image Generator G1 V2 models.

**To replace an object in an image**

1. Navigate to the Amazon SageMaker Unified Studio landing page by using the URL from your administrator.

1. Access Amazon SageMaker Unified Studio using your IAM or single sign-on (SSO) credentials. For more information, see [Access Amazon SageMaker Unified Studio](getting-started-access-the-portal.md).

1. At the top of the page, choose **Discover**.

1. In the **GENERATIVE AI** section, choose **Image and video playground**.

1. If the **Configurations** pane isn't open, choose the configuration button.

1. For **Model** select a model to use.

1. For **Action**, choose **Replace object**.

1. (Optional) For **Negative prompt** enter text that describes content or concepts that you do not want the model to include in the image.

1. In **Reference image** choose **Upload image** and upload the image that you want the model to use with the action. 

1. (Optional) In **Advanced configurations**, change how the model generates images by making advanced configuration changes. For more information, see [Advanced configuration options](explore-image-playground.md#bedrock-image-advanced-configurations).

1. In the center pane, use the masking tool to draw a bounding box around the area of the image that you want the action to update. You can do the following:

   1. Resize the bounding box by selecting a corner of the bounding box with your mouse button. Then, drag the mouse to resize the bounding box. Release the mouse button to complete resizing the bounding box.

   1. Move the bounding box by selecting the interior of the bounding box with your mouse button. Move the bounding box to the new location and release the mouse button.

1. In the **Enter prompt** text box, enter a prompt that describes the object that you want to replace the masked object with.

1. Press Enter on your keyboard to start the action. Amazon Bedrock in SageMaker Unified Studio shows the image that the model generates in the playground.

# Replace the background for an image
<a name="bedrock-image-playground-replace-background"></a>

The following procedure shows you how to use a model to replace the background for an image. For example, you could change the background for an image from a view of a forest to a view of city buildings. You can set various configurations such as the number of images to generate and how strongly the prompt affects the generation of the image. For more information, see [Configuration options](explore-image-playground.md#bedrock-image-configuration).

**Note**  
The background replacement action is only available with Titan Image Generator G1 V1 and Titan Image Generator G1 V2 models.

**To replace the background for an image**

1. Navigate to the Amazon SageMaker Unified Studio landing page by using the URL from your administrator.

1. Access Amazon SageMaker Unified Studio using your IAM or single sign-on (SSO) credentials. For more information, see [Access Amazon SageMaker Unified Studio](getting-started-access-the-portal.md).

1. At the top of the page, choose **Discover**.

1. In the **GENERATIVE AI** section, choose **Image and video playground**.

1. If the **Configurations** pane isn't open, choose the configuration button.

1. For **Model** select a model to use.

1. For **Action** choose **Replace background**.

1. (Optional) For **Negative prompt** enter text that describes content or concepts that you don't want the model to include in the image.

1. In **Reference image** choose **Upload image** and upload the image that you want the model to use with the action. 

1. (Optional) In **Advanced configurations**, change how the model generates images by making advanced configuration changes. For more information, see [Advanced configuration options](explore-image-playground.md#bedrock-image-advanced-configurations).

1. In the center pane, use the masking tool to draw a bounding box around the area of the image that you want the action to preserve. The model updates the area outside of the bounding box. You can do the following:

   1. Resize the bounding box by selecting a corner of the bounding box with your mouse button. Then, drag the mouse to resize the bounding box. Release the mouse button to complete resizing the bounding box.

   1. Move the bounding box by selecting the interior of the bounding box with your mouse button. Move the bounding box to the new location and release the mouse button.

1. In the **Enter prompt** text box, enter a prompt that describes the background that you want the image to have.

1. Press Enter on your keyboard to start the action. Amazon Bedrock in SageMaker Unified Studio shows the image that the model generates in the playground.
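Background replacement corresponds to Titan's `OUTPAINTING` task type: the region you mask is preserved, and the model repaints everything outside it from your prompt. The helper below sketches that request shape; field names follow the Titan schema, but the function is an illustrative assumption, not the playground's implementation.

```python
import base64

def build_replace_background_request(image_bytes, keep_mask_prompt,
                                     new_background_prompt):
    """Illustrative Titan OUTPAINTING request: content matching
    keep_mask_prompt is kept; the surrounding background is
    regenerated from new_background_prompt."""
    return {
        "taskType": "OUTPAINTING",
        "outPaintingParams": {
            "image": base64.b64encode(image_bytes).decode("utf-8"),
            "maskPrompt": keep_mask_prompt,
            "text": new_background_prompt,
        },
        "imageGenerationConfig": {"numberOfImages": 1},
    }
```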

# Edit an image with the image sandbox
<a name="bedrock-image-playground-image-sandbox"></a>

If you use the image playground with a Stable Diffusion XL model, you can use the image sandbox to make changes to an image.

**Note**  
The image sandbox action is only available with Stable Diffusion XL models.

**To edit an image in the image sandbox**

1. Navigate to the Amazon SageMaker Unified Studio landing page by using the URL from your administrator.

1. Access Amazon SageMaker Unified Studio using your IAM or single sign-on (SSO) credentials. For more information, see [Access Amazon SageMaker Unified Studio](getting-started-access-the-portal.md).

1. At the top of the page, choose **Discover**.

1. In the **GENERATIVE AI** section, choose **Image and video playground**.

1. If the **Configurations** pane isn't open, choose the configuration button.

1. For **Model** select a model to use.

1. For **Action**, choose **Edit image sandbox**.

1. (Optional) For **Negative prompt** enter text that describes content or concepts that you do not want the model to include in the image.

1. In **Reference image** choose **Upload image** and upload the image that you want the model to use with the action. 

1. (Optional) In **Advanced configurations**, change how the model generates images by making advanced configuration changes. For more information, see [Advanced configuration options](explore-image-playground.md#bedrock-image-advanced-configurations).

1. In the center pane, use the masking tool to draw a bounding box around the area of the image that you want the action to update. You can do the following:

   1. Resize the bounding box by selecting a corner of the bounding box with your mouse button. Then, drag the mouse to resize the bounding box. Release the mouse button to complete resizing the bounding box.

   1. Move the bounding box by selecting the interior of the bounding box with your mouse button. Move the bounding box to the new location and release the mouse button.

1. In the **Enter prompt** text box, enter a prompt that describes the edit that you want the model to make inside the bounding box.

1. Press Enter on your keyboard to start the action. Amazon Bedrock in SageMaker Unified Studio shows the image that the model generates in the playground.

# Generate a video with the Amazon Bedrock image and video playground
<a name="bedrock-explore-video-playground"></a>

The Amazon Bedrock in SageMaker Unified Studio image and video playground is where you can generate a short video with a suitable Amazon Bedrock model. To generate a video with a model, you supply a [prompt](explore-prompts.md) that describes the video that you want to create and configuration information that influences how the model generates the video. For example, you can start with an image of a rock band and use the prompt to request an animated video of the band playing a live concert. The image and video playground also provides quick start prompts that illustrate the kinds of video that you can create.

You can download a video that you create in the image and video playground.

To create more complex videos, you can use a storyboard to connect a sequence of shots.

Your administrator sets the retention policy for videos that you upload and generate with the playground. For more information, contact your administrator.

**Topics**
+ [Configure video generation](bedrock-explore-video-playground-configuration.md)
+ [Generate a video from a prompt](bedrock-explore-video-playground-procedure.md)
+ [Plan a video with the storyboard](bedrock-explore-video-playground-storyboard.md)

# Configure video generation
<a name="bedrock-explore-video-playground-configuration"></a>

To configure video generation, you choose a model to use and optionally set parameters that influence the output of the model. Currently, Amazon Bedrock in SageMaker Unified Studio supports video generation with Amazon Nova models. If you don't make any configuration changes, the playground uses the default values for the model.

## Amazon Nova model settings
<a name="bedrock-explore-video-playground-configuration-nova"></a>

With Amazon Nova models, you can set the following configurations:
+ **Start image** – (Optional) A reference image that the model uses as a starting point for the video. The image dimensions must be 1280x720 pixels. If you supply an image with different dimensions, the playground resizes the image to 1280x720 pixels.
+ **Seed** – (Optional) Initializes the random number generator used in the video generation process. Higher seed values don't correlate with any particular quality or characteristic in the output. Instead, use different seed values to explore variations of the output, with or without the same prompt. Repeatedly using the same seed value and prompt creates the exact same video.

For more information, see the [Amazon Nova guide](https://docs.aws.amazon.com/nova/latest/userguide).
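If you later generate video through the API instead of the playground, these settings appear in the Nova Reel model input (video generation is asynchronous, and the result is written to Amazon S3). The sketch below builds the model input only; the field names follow the Nova Reel request schema as documented, but treat the exact shape as an assumption to verify against the Amazon Nova guide.

```python
def build_nova_reel_input(prompt, seed=0, start_image_b64=None):
    """Illustrative Nova Reel text-to-video model input. Reusing the
    same seed and prompt reproduces the same video."""
    params = {"text": prompt}
    if start_image_b64:
        # Start image must be 1280x720 pixels, matching the
        # playground's Start image option.
        params["images"] = [
            {"format": "png", "source": {"bytes": start_image_b64}}
        ]
    return {
        "taskType": "TEXT_VIDEO",
        "textToVideoParams": params,
        "videoGenerationConfig": {
            "durationSeconds": 6,
            "fps": 24,
            "dimension": "1280x720",
            "seed": seed,
        },
    }
```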

# Generate a video from a prompt
<a name="bedrock-explore-video-playground-procedure"></a>

The following instructions show you how to generate a video in the image and video playground. If you are using Amazon Nova Reel 1.1, you can use the [storyboard](bedrock-explore-video-playground-storyboard.md) to make more complex videos.

**To generate a video**

1. Navigate to the Amazon SageMaker Unified Studio landing page by using the URL from your administrator.

1. Access Amazon SageMaker Unified Studio using your IAM or single sign-on (SSO) credentials. For more information, see [Access Amazon SageMaker Unified Studio](getting-started-access-the-portal.md).

1. At the top of the page, choose **Discover**.

1. In the **GENERATIVE AI** section, choose **Image and video playground**.

1. If the **Configurations** pane isn't open, choose the configuration button.

1. For **Model** select a model to use.

1. (Optional) In the **Configurations** section, set parameters to influence the output of the model. Note that you can set additional configurations in **Advanced configurations**. The parameters that are available depend on the model that you use. For more information, see [Configure video generation](bedrock-explore-video-playground-configuration.md).

1. In the **Enter prompt** text box, enter **Create an animated video of a local classic rock band playing on an outdoor stage.**. Alternatively, enter a prompt of your choosing.

1. Press Enter on your keyboard to start generating the video. Amazon Bedrock in SageMaker Unified Studio shows the video that the model generates in the playground. 

1. Choose the play button to view the video. 

1. (Optional) Choose the download button to download the video to your computer.

# Plan a video with the storyboard
<a name="bedrock-explore-video-playground-storyboard"></a>

To create a more complex video, you can use the storyboard to plan the video that you want to create. In the storyboard, you connect a sequence of shots, which the model combines to generate the video. Each shot is a prompt and an optional start frame image. Each shot the model generates is always 6 seconds in length. You can't set a specific duration for the video, but you can affect the duration by adding or removing shots in the storyboard. The maximum length of video that you can generate is 120 seconds.

**Note**  
You can only use the storyboard with Amazon Nova Reel 1.1.

**To generate a video with the storyboard**

1. Navigate to the Amazon SageMaker Unified Studio landing page by using the URL from your administrator.

1. Access Amazon SageMaker Unified Studio using your IAM or single sign-on (SSO) credentials. For more information, see [Access Amazon SageMaker Unified Studio](getting-started-access-the-portal.md).

1. At the top of the page, choose **Discover**.

1. In the **GENERATIVE AI** section, choose **Image and video playground**.

1. If the **Configurations** pane isn't open, choose the configuration button.

1. For **Model** select a model to use.

1. (Optional) In the **Configurations** section, set parameters to influence the output of the model. Note that you can set additional configurations in **Advanced configurations**. The parameters that are available depend on the model that you use. For more information, see [Configure video generation](bedrock-explore-video-playground-configuration.md).

   In the storyboard, you can't set the duration for the video. 

1. In the center pane, choose **Storyboard**.

1. Choose **Add shot** and do the following:

   1. Choose **Describe what happens in this shot...** and enter the text for the prompt. You can update the prompt later, if necessary.

   1. (Optional) Choose **Add start frame** to upload a starting image for the video.

   1. (Optional) Choose the trash icon to remove shots that you no longer need.

1. Repeat the previous step until you have added all the shots for your video.

1. Choose the run button to start generating the video. Don't leave the page while the model generates the video.

1. When the model finishes generating the video, choose the play button to view the video.

1. (Optional) Make further edits and add shots as you need them. 

1. (Optional) Download the video by right-clicking on the video and selecting **Save video as...**. 

# Access shared generative AI assets in an Amazon Bedrock playground
<a name="bedrock-playground-shared-assets"></a>

Other Amazon Bedrock in SageMaker Unified Studio users can share [chat agent apps](create-chat-app.md) and [prompts](prompt-mgmt.md) with you as *Shared generative AI assets*. You access assets from the **Shared apps and prompts** section in a playground. You can view the type of each asset and the projects that contain the assets.

In a playground, you can experiment with shared app and prompt assets, but you can't make changes to their configuration. If you want to make changes, you need to open the project that contains the asset. You can share chat agent apps and prompts that you create in a project. For more information, see [Share an Amazon Bedrock chat agent app](app-share.md) and [Share an Amazon Bedrock prompt version](sharing-a-prompt.md).

**To access and use a shared asset in a playground**

1. Navigate to the Amazon SageMaker Unified Studio landing page by using the URL from your administrator.

1. Access Amazon SageMaker Unified Studio using your IAM or single sign-on (SSO) credentials. For more information, see [Access Amazon SageMaker Unified Studio](getting-started-access-the-portal.md).

1. At the top of the page, choose **Discover**.

1. In the **GENERATIVE AI** section, choose **Shared apps and prompts**.

1. In the playground, select the name of the asset that you want to use. The **Asset type** column shows the type of the asset (app or prompt).

1. Use the asset in the playground.

# Build a chat agent app with Amazon Bedrock
<a name="create-chat-app"></a>

An Amazon Bedrock in SageMaker Unified Studio chat agent app allows users to chat with an Amazon Bedrock model through a conversational interface, typically by sending text messages and receiving responses. The model analyzes the user's input, formulates an appropriate response, and carries on a dialogue with the user. You can use a chat agent app for various purposes, such as providing customer service, answering questions, offering recommendations, or engaging in open-ended conversations on a wide range of topics. You can enhance a chat agent app by integrating the following Amazon Bedrock features:
+ **[Data sources](data-sources.md)** — Enrich model responses by including context generated from an Amazon Bedrock knowledge base. 
+ **[Guardrails](guardrails.md)** — Lets you implement safeguards for your chat agent app based on your use cases and responsible AI policies. 
+ **[Functions](functions.md)** — Lets a model call a function to access a specific capability when handling a prompt. 

When you first create a chat agent app, you have a working draft of the app. Changes you make to your chat agent app apply to the working draft. You iterate on your working draft until you're satisfied with the behavior of your app. At any time, you can save your chat agent app.

Once you create and save a chat agent app, you can do the following:
+ [Share the chat agent app](app-share.md) with other users.
+ [Export the chat agent app](app-export.md) for use outside of Amazon SageMaker Unified Studio.
+ Use the chat agent app as an agent node in a flow app.

To use a chat agent app in a flow app, you need to [deploy](app-deploy-app.md) the app. Amazon Bedrock in SageMaker Unified Studio deploys the app for you whenever you share the app with other users.

In this section, you learn how to create a chat agent app that uses Amazon Bedrock in SageMaker Unified Studio components such as a [data source](data-sources.md) and a [guardrail](guardrails.md). You also learn how to share your app with other users.

**Topics**
+ [Create a chat agent app with Amazon Bedrock](create-chat-app-with-components.md)
+ [Deploy an Amazon Bedrock chat agent app](app-deploy.md)
+ [Share an Amazon Bedrock chat agent app](app-share.md)

# Create a chat agent app with Amazon Bedrock
<a name="create-chat-app-with-components"></a>

In this section, you learn how to create a simple Amazon Bedrock in SageMaker Unified Studio chat agent app that creates playlists for a radio station.

The app can generate playlists and get the dates and locations of upcoming shows. Later, you add the following Amazon Bedrock features.
+ A guardrail to prevent songs with inappropriate titles.
+ A knowledge base that lets the app create playlists using your unique song information.
+ A function that gets today's top 10 songs. 

**Topics**
+ [Step 1: Create the initial chat agent app](#chat-app-create-app)
+ [Step 2: Add a guardrail to your chat agent app](#chat-app-add-guardrail)
+ [Step 3: Add a knowledge base to your chat agent app](#chat-app-add-data-source)
+ [Step 4: Add a function call to your chat agent app](#chat-app-add-function-call)

## Step 1: Create the initial chat agent app
<a name="chat-app-create-app"></a>

In this step you create a chat agent app that generates playlists for a radio station. 

To create the app, you first need to create an Amazon SageMaker Unified Studio [project](projects.md). A project can contain multiple apps and is also where you can add the Amazon Bedrock components that you want your apps to use. Later, you will add guardrail, knowledge base, and function components to your app. You can share a project with other users and groups of users. For more information, see [Share an Amazon Bedrock chat agent app](app-share.md).

To help guide users of the app, you can set user interface (UI) text, such as hint text for the beginning of a chat.

In the app, you can experiment with the randomness and diversity of the response that the model returns by changing the [inference parameters](explore-prompts.md#inference-parameters). 
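The app's UI exposes these inference parameters directly. For orientation, the sketch below shows how the same settings appear if you call a model yourself with the Amazon Bedrock Converse API via boto3; the helper and its default values are illustrative assumptions, not the chat agent app's actual implementation.

```python
def build_converse_kwargs(system_instruction, user_prompt,
                          temperature=0.7, top_p=0.9, stop_sequences=None):
    """Assemble keyword arguments for the bedrock-runtime Converse API
    (illustrative; the chat agent app manages this for you)."""
    kwargs = {
        "messages": [{"role": "user", "content": [{"text": user_prompt}]}],
        "system": [{"text": system_instruction}],
        "inferenceConfig": {
            # Higher temperature => more random, diverse responses.
            "temperature": temperature,
            "topP": top_p,
        },
    }
    if stop_sequences:
        kwargs["inferenceConfig"]["stopSequences"] = stop_sequences
    return kwargs

def chat_once(model_id, **kwargs):
    """Send one turn to the model (assumes boto3 is installed and
    AWS credentials are configured)."""
    import boto3
    client = boto3.client("bedrock-runtime")
    response = client.converse(modelId=model_id, **kwargs)
    return response["output"]["message"]["content"][0]["text"]
```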

While you develop your app, you work on the current draft. You can save the current draft to the app history. Later you might want to restart work from a previous draft. For more information, see [Use app history to view and restore versions of an Amazon Bedrock app](app-history.md).

**Warning**  
Generative AI may give inaccurate responses. Avoid sharing sensitive information. Chats may be visible to others in your organization.

**To create an Amazon Bedrock chat agent app**

1. Navigate to the Amazon SageMaker Unified Studio landing page by using the URL from your administrator.

1. Access Amazon SageMaker Unified Studio using your IAM or single sign-on (SSO) credentials. For more information, see [Access Amazon SageMaker Unified Studio](getting-started-access-the-portal.md).

1. On the Amazon SageMaker Unified Studio home page, in the **Amazon Bedrock in SageMaker Unified Studio** tile, choose **Build chat agent app** to create a new chat agent app. The **Select or create a new project to continue** dialog box opens.

1. In the **Select or create a new project to continue** dialog box, do one of the following:
   + If you want to use a new project, follow the instructions at [Create a new project](create-new-project.md). For the **Project profile** in step 1, choose **Generative AI application development**.
   + If you want to use an existing project, select the project that you want to use and then choose **Continue**. 

1. In **Untitled App - nnnn**, enter **Radio show** as the name for your app. 

1. In the **Configs** pane, do the following:

   1. For **Model**, select a model that supports Guardrails, Data, and Function components. The description of the model tells you the components that a model supports. For full information about the model, choose **View full model details** in the information panel. For more information, see [Find serverless models with the Amazon Bedrock model catalog](model-catalog.md). If you don't have access to an appropriate model, contact your administrator. Different models might not support all features. 

   1. For **Enter a system instruction** in **Instructions for chat agent & examples**, enter **You are a chat agent app that creates 2 hour long playlists for a radio station that plays rock and pop music.**.

   1. In the **UI** section, update the user interface for the app by doing the following:

      1. In **Hint text for empty chat** enter **Hi! I'm your radio show playlist creator.**.

      1. In **Hint text for user input** enter **Enter a prompt that describes the playlist that you want.**.

      1. In **Quick start prompts** choose **Edit**.

      1. Choose **Reset** to clear the list of quick start prompts.

      1. For **Quick-start prompt 1**, enter **Create a playlist of pop music songs.**.

      1. (Optional) Enter quick start prompts of your choice in the remaining quick start prompt text boxes.

      1. Choose **Back to configs**.

1. Choose **Save** to save the current working draft of your app. 

1. In the **Quick start prompts** section of the **Preview** pane, run the quick start prompt that you just created by choosing the prompt. 

   The app shows the prompt and the response from the model in the **Preview** pane.

1. In the prompt text box (the text should read **Enter a prompt that describes the playlist that you want**), enter **Create a playlist of songs where each song on the list is related to the next song, by musician, bands, or other connections. Be sure to explain the connection from one song to the next.**.

1. Choose the run button (or press Enter on your keyboard) to send the prompt to the model.

1. (Optional) In the **Inference parameters** section, change the inference parameters. For example, include less familiar songs in the playlist by increasing the **Temperature** inference parameter. 

   The inference parameters you can change are *Temperature*, *Top P*, and *Top K*. Not all models support each of these inference parameters. For more information, see [Inference parameters](explore-prompts.md#inference-parameters). 

1. (Optional) In **Stop sequences**, add one or more stop sequences for the model. A stop sequence ensures that the model stops generating text immediately after it generates text that matches the stop sequence. Stop sequences are useful for getting precise answers without unnecessary additional text. Not all models support stop sequences.

1. (Optional) In **Reasoning**, select **Model reasoning** to enable model reasoning. With model reasoning, a model uses chain-of-thought reasoning to break a large, complex task into smaller, simpler steps. You can only enable or disable model reasoning at the start of a new chat. You can specify the number of tokens to use, which includes both output and reasoning tokens. Not all models support model reasoning. For more information, see [Enhance model responses with model reasoning](https://docs.aws.amazon.com/bedrock/latest/userguide/inference-reasoning.html) in the *Amazon Bedrock user guide*. 

1. (Optional) Share your app with others by following the instructions at [Share an Amazon Bedrock chat agent app](app-share.md).

1. (Optional) Deploy your app for use in a flow app by following the instructions at [Deploy an Amazon Bedrock chat agent app](app-deploy.md).

1. (Optional) Export your snapshot from Amazon SageMaker Unified Studio by following the instructions at [Use your app outside of Amazon SageMaker Unified Studio](app-export.md).

1. Next step: Add a guardrail to your app by following the instructions at [Step 2: Add a guardrail to your chat agent app](#chat-app-add-guardrail).
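The effect of a stop sequence, described in the steps above, can be illustrated with a short sketch. This is a simplified model of the behavior, not the service's implementation; the helper function and sample output are hypothetical:

```python
def apply_stop_sequence(generated_text, stop_sequences):
    """Return the text up to the earliest stop sequence (illustrative only)."""
    cut = len(generated_text)
    for stop in stop_sequences:
        index = generated_text.find(stop)
        if index != -1:
            cut = min(cut, index)
    return generated_text[:cut]

# The model stops as soon as it emits the stop sequence, so the extra
# commentary after END_OF_PLAYLIST never reaches the user.
raw_output = "1. Song A\n2. Song B\nEND_OF_PLAYLIST\nSome additional notes..."
print(apply_stop_sequence(raw_output, ["END_OF_PLAYLIST"]))
```

In practice you configure the stop sequences in the playground and the model applies them during generation; no code is required.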

## Step 2: Add a guardrail to your chat agent app
<a name="chat-app-add-guardrail"></a>

Guardrails for Amazon Bedrock lets you implement safeguards for your Amazon Bedrock in SageMaker Unified Studio app based on your use cases and responsible AI policies. You can create multiple guardrails tailored to different use cases and apply them across multiple foundation models, providing a consistent user experience and standardizing safety controls across generative AI apps. You can configure denied topics to disallow undesirable topics and content filters to block harmful content in the prompts you send to a model and to the responses you get from a model. You can use guardrails with text-only foundation models. For more information, see [Safeguard your Amazon Bedrock app with a guardrail](guardrails.md).

### Add a guardrail
<a name="guardrails-add-guardrail"></a>

This procedure shows you how to use a guardrail to safeguard the app you created in [Step 1: Create the initial chat agent app](#chat-app-create-app). The guardrail prevents inappropriate language in song titles and filters out unwanted music genres. 

**To add a guardrail to an Amazon Bedrock app**

1. Open the app that you created in [Step 1: Create the initial chat agent app](#chat-app-create-app).

1. In the **Configs** pane, choose **Guardrails** and then **Create new guardrail**.

1. For **Guardrail name**, enter **prevent\_unwanted\_songs**. 

1. For **Guardrail description**, enter **Prevents inappropriate or undesirable songs.**.

1. In **Content filters** make sure **Enable content filters** is selected. For more information, see [Content filters](guardrails.md#guardrails-studio-content-filters).

1. In **Filter for prompts** make sure the filter for each category is set to **High**.

1. Make sure **Apply the same filters for responses** is selected.

1. In **Blocked messaging** do the following.

   1. For **Blocked messaging for prompts**, enter **Sorry, your prompt contained inappropriate text.**.

   1. Clear **Apply the same message for blocked responses**.

   1. For **Blocked messaging for responses**, enter **Sorry, but I can't respond with information that contains inappropriate text.**.

1. Choose **Create** to create the guardrail.

1. In the **Configs** pane, in the **Guardrails** section, select the guardrail that you just created (**prevent\_unwanted\_songs**). It might take a minute for the guardrail to appear in the list.

1. Test the guardrail by entering **Create a list of 10 songs where each song has a swear word in the title.** in the prompt edit box. 

1. Choose the run button to send the prompt to the model. The model should respond with the message **Sorry, but I can't respond with information that contains inappropriate text.**

1. Use a denied topic filter to prevent requests for music from a specific genre. For information about denied topics, see [Denied topics](guardrails.md#guardrails-topic-policies).

   To add the filter, do the following.

   1. In the **Guardrails** section of the **Configs** pane, select the guardrail and choose **Preview**.

   1. Choose **Edit** to edit the guardrail.

   1. In **Denied topics**, choose **Add topic**.

   1. For **Name**, enter **heavy metal**. 

   1. For **Definition for topic**, enter **Avoid mentioning songs that are from the heavy metal genre of music.**.

   1. In **Sample phrases - optional**, enter **Create a playlist of heavy metal songs**.

   1. (Optional) Choose **Add phrase** to add other phrases.

   1. Choose **Save**.

   1. On the **Edit guardrail** page, choose **Update** to update the guardrail.

   1. Test the guardrail by entering **Create a list of heavy metal songs.** in the prompt edit box.

   1. Choose the run button to send the prompt to the model. The model should respond with the message **Sorry, your prompt contained inappropriate text**.

1. Next step: Add a data source to your app by following the instructions at [Step 3: Add a knowledge base to your chat agent app](#chat-app-add-data-source).

## Step 3: Add a knowledge base to your chat agent app
<a name="chat-app-add-data-source"></a>

You can use your own data in your app by adding a knowledge base. Doing this gives your app access to information that is only available to you. When you send a query to your app, Amazon Bedrock in SageMaker Unified Studio generates a response that includes the query results from the knowledge base. For more information, see [Add a Knowledge Base to your Amazon Bedrock app](data-sources.md).

In this topic, you update the app you created in [Step 1: Create the initial chat agent app](#chat-app-create-app) to use a CSV file as a document (local file) data source for a knowledge base. The CSV file includes information about bands that don't have public metadata, such as song length or music genre. Users can then use the app to create a playlist based on criteria such as song length or music genre.

**To add your own data to an Amazon Bedrock app**

1. Create a CSV file named *songs.csv* and fill it with the following fictitious data.

   ```
   song,artist,genre,length-seconds
   "Celestial Odyssey","Starry Renegades","Cosmic Rock",240
   "Neon Rapture","Synthwave Siren","Synthwave Pop",300
   "Wordsmith Warriors","Lyrical Legions","Lyrical Flow",180
   "Nebula Shredders","Galactic Axemen","Cosmic Rock",270
   "Electro Euphoria","Neon Nomads","Synthwave Pop",210
   "Rhythm Renegades","Percussive Pioneers","Lyrical Flow",240
   "Stardust Rift","Cosmic Crusaders","Cosmic Rock",180
   "Synthwave Serenade","Electro Enchanters","Synthwave Pop",300
   "Lyrical Legends","Rhyme Royale","Lyrical Flow",240
   "Supernova Shredders","Amplified Ascension","Cosmic Rock",300
   "Celestial Chords","Ethereal Echoes","Cosmic Rock",240
   "Neon Nirvana","Synthwave Sirens","Synthwave Pop",270
   "Verbal Virtuoso","Lyrical Maestros","Lyrical Flow",210
   "Cosmic Collision","Stellar Insurgents","Cosmic Rock",180
   "Pop Paradox","Melodic Mavericks","Synthwave Pop",240
   "Flow Fusion","Verbal Virtuosos","Lyrical Flow",300
   "Shredding Shadows","Crimson Crusaders","Cosmic Rock",270
   "Synth Serenade","Electro Enchanters","Synthwave Pop",180
   "Wordsmith Warlords","Lyrical Legionnaires","Lyrical Flow",240
   "Sonic Supernova","Amplified Ascension","Cosmic Rock",210
   "Celestial Symphony","Ethereal Ensemble","Cosmic Rock",300
   "Electro Euphoria","Neon Nomads","Synthwave Pop",180
   "Lyrical Legends","Rhyme Royale","Lyrical Flow",270
   "Crimson Crescendo","Scarlet Serenaders","Cosmic Rock",240
   "Euphoric Tides","Melodic Mystics","Synthwave Pop",210
   "Rhythm Renegades","Percussive Pioneers","Lyrical Flow",180
   "Cosmic Collision","Stellar Insurgents","Cosmic Rock",300
   "Stardust Serenade","Celestial Crooners","Synthwave Pop",240
   "Wordsmith Warriors","Lyrical Legions","Lyrical Flow",270
   "Sonic Supernova III","Amplified Ascension","Cosmic Rock",180
   ```

1. Open the app that you created in [Step 1: Create the initial chat agent app](#chat-app-create-app). 

1. In **Data** choose **Use Knowledge Base** and then **Create Knowledge Base**. The **Create Knowledge Base** pane appears. If you've previously created the knowledge base, go to step *10* and select the knowledge base.

1. For **Name**, enter a name for the Knowledge Base.

1. For **Description**, enter a description for the Knowledge Base.

1. In **Add data sources**, choose **Local file**.

1. Choose **Click to upload** and upload the CSV file that you created in step 1. Alternatively, add the CSV by dragging and dropping the document from your computer.

   For more information, see [Use a Local file as a data source](data-source-document.md).

1. For **Embeddings model**, choose a model for converting your data into vector embeddings.

1. Choose **Create**. It might take Amazon Bedrock in SageMaker Unified Studio a few minutes to create the knowledge base.

1. For **Select Knowledge Base**, select the Knowledge Base that you just created. 

1. Test the data source by entering **Create a playlist of songs in the Lyrical Flow genre** in the prompt text box.

1. Choose the run button to send the prompt to the model. The model should respond with a playlist of songs from the Lyrical Flow genre that the CSV file contains.

1. Choose **Save** to save the app.
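If you want to sanity-check the format of *songs.csv* before uploading it, you can parse it with a short script. This is an optional convenience outside the console workflow; the snippet below embeds only the first few rows for brevity:

```python
import csv
import io

# First few rows of songs.csv, embedded for illustration; paste the full
# file contents here (or read the file with open("songs.csv")).
csv_text = """song,artist,genre,length-seconds
"Celestial Odyssey","Starry Renegades","Cosmic Rock",240
"Neon Rapture","Synthwave Siren","Synthwave Pop",300
"Wordsmith Warriors","Lyrical Legions","Lyrical Flow",180
"""

rows = list(csv.DictReader(io.StringIO(csv_text)))

# Count songs per genre and confirm every length parses as an integer.
genre_counts = {}
for row in rows:
    genre_counts[row["genre"]] = genre_counts.get(row["genre"], 0) + 1
    int(row["length-seconds"])  # raises ValueError if a length is malformed

print(genre_counts)
```

A clean parse here means the knowledge base ingestion sees well-formed rows, which helps the model answer genre and song-length queries accurately.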

## Step 4: Add a function call to your chat agent app
<a name="chat-app-add-function-call"></a>

Amazon Bedrock in SageMaker Unified Studio functions let a model include information that it has no previous knowledge of in its response. For example, you can use a function to include dynamic information in a model's response such as a weather forecast, sports results, or traffic conditions. To use a function in Amazon Bedrock in SageMaker Unified Studio you add a function component to your app. For more information, see [Call functions from your Amazon Bedrock chat agent app](functions.md).

In Amazon Bedrock in SageMaker Unified Studio, a function calls an API hosted outside of Amazon Bedrock in SageMaker Unified Studio. You either create the API yourself, or use an existing API. To create an API, you can use [Amazon API Gateway](https://docs.aws.amazon.com/apigateway/). 

In this procedure, you add a function to the app that you created in [Step 1: Create the initial chat agent app](#chat-app-create-app) so that users can get a list of the top 10 songs played on the radio station that day. 

**To add a function to an Amazon Bedrock app**

1. Create an HTTPS server that implements a `TopSongsToday` function. Make sure the function adheres to the following schema.

   ```
   openapi: 3.0.0
   info:
     title: Top Songs API
     description: API to retrieve the top 10 songs played today
     version: 1.0.0
   
   paths:
     /top-songs:
       get:
         operationId: TopSongsToday
         summary: Get the top 10 songs played today
         description: >
           This endpoint returns an array of the top 10 songs played today,
           ordered by popularity. The first element in the array (index 0)
           represents the most popular song, and the last element (index 9)
           represents the 10th most popular song.
         responses:
           '200':
             description: Successful response
             content:
               application/json:
                 schema:
                   $ref: '#/components/schemas/TopSongs'
   
   components:
     schemas:
       TopSongs:
         type: array
         items:
           $ref: '#/components/schemas/Song'
         description: >
           An array containing the top 10 songs played today. The first element
           (index 0) is the most popular song, and the last element (index 9)
           is the 10th most popular song.
         example:
           - title: 'Song Title 1'
             artist: 'Artist Name 1'
             album: 'Album Name 1'
           - title: 'Song Title 2'
             artist: 'Artist Name 2'
             album: 'Album Name 2'
           # ... up to 10 songs
   
       Song:
         type: object
         properties:
           title:
             type: string
             description: The title of the song
           artist:
             type: string
             description: The name of the artist or band
           album:
             type: string
             description: The name of the album the song is from
         required:
           - title
           - artist
           - album
   ```

1. Open the app that you created in [Step 1: Create the initial chat agent app](#chat-app-create-app).

1. In **Models**, choose a model that supports functions. If you don't have access to an appropriate model, contact your administrator.

1. In **Functions**, choose **Create new function**.

1. In the **Create function** pane, do the following.

   1. For **Function name**, enter **Top\_ten\_songs\_today**.

   1. For **Function description (optional)**, enter **Today's top 10 songs.**.

   1. For **Function schema**, enter the OpenAPI schema from step 1.

   1. Choose **Validate schema** to validate the schema. 

   1. In **Authentication method** choose the authentication method for your HTTP server. For more information, see [Authentication methods](functions.md#functions-authentication).

   1. In **API servers**, enter the URL for your server in **Server URL**. This value is autopopulated if the server URL is in the schema.

   1. Choose **Create** to create your function. It might take a few minutes to create the function.

1. For **Enter a system instruction**, update the system instruction so that it describes the function. Use the following text: **You are an app that creates 2 hour long playlists for a radio station that plays rock and pop music. The function Top\_ten\_songs\_today gets the most popular songs played on the radio station today.**.

1. Test the function by doing the following.

   1. Enter **What are today's top 10 songs?** in the prompt edit box.

   1. Choose the run button to send the prompt to the model. The model should respond with the list of today's top 10 songs.
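A minimal back end for step 1 could look like the following sketch, written with the Python standard library. The sample data, handler, and port are hypothetical placeholders; a real server would read play counts from your station's data, and the deployed endpoint must be served over HTTPS (for example, behind Amazon API Gateway):

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

# Hypothetical sample data matching the TopSongs schema: ten songs,
# ordered from most popular (index 0) to 10th most popular (index 9).
TOP_SONGS = [
    {"title": f"Song Title {i}", "artist": f"Artist Name {i}", "album": f"Album Name {i}"}
    for i in range(1, 11)
]

class TopSongsHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/top-songs":
            body = json.dumps(TOP_SONGS).encode("utf-8")
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_error(404)

# To try it locally over plain HTTP (testing only; the function needs HTTPS):
# HTTPServer(("", 8080), TopSongsHandler).serve_forever()
```

The JSON array the handler returns matches the `TopSongs` schema, so the model can map the function's response back to the user's question.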

# Deploy an Amazon Bedrock chat agent app
<a name="app-deploy"></a>

You deploy a chat agent app for the following reasons:
+ You want to use a chat agent app as an [agent node](flows-use-chat-agent.md) within a flow app. Before you can do so, you must first deploy the chat agent app.
+ You want to [share](app-share.md) a chat agent app with other users. When you share the app, Amazon Bedrock deploys the chat agent app for you.

During deployment, Amazon Bedrock creates or reuses an *alias* for the chat agent app and creates a new *version* of the chat agent app. 

**Aliases and versions**

An alias is associated with a specific version of a chat agent app. When you use a chat agent app as an agent node in a flow app, you configure the agent node in the flow app with the alias of the chat agent app. When you share an app, the recipient can access the version of the app that is associated with the alias. You can create up to 8 aliases for a chat agent app.

A version is a snapshot of a chat agent app at the time you deploy the app. Amazon Bedrock creates a new version of your chat agent app each time you deploy the app. Amazon Bedrock creates versions in numerical order, starting from 1.

With aliases, you can switch efficiently between [different versions](app-change-alias-version.md) of a chat agent app, without having to update the flow apps that use the chat agent app. For example, if you discover an issue with the current version of your chat agent app, you can quickly update agent nodes that use the current version to a previous version. 
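The relationship between aliases and versions can be pictured as a simple lookup table. This sketch is only a mental model of the behavior described above, not how Amazon Bedrock stores deployments:

```python
# Each deployment appends a new version; an alias points at exactly one version.
versions = []   # version n is stored at index n - 1
aliases = {}    # alias name -> version number

def deploy(snapshot, alias_name):
    versions.append(snapshot)
    aliases[alias_name] = len(versions)  # versions are numbered from 1

deploy("draft with guardrail", "live")
deploy("draft with knowledge base", "live")
print(aliases["live"])  # the "live" alias now points at version 2

# Rolling back: repoint the alias, and every flow app that uses the
# "live" alias immediately sees version 1, with no flow app changes.
aliases["live"] = 1
```

Because flow apps reference the alias rather than a version number, repointing the alias is the only change needed to roll back.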

The following procedure shows you how to deploy a chat agent app. After deploying the app, you can use it as an [agent node](flows-use-chat-agent.md) in a flow app. You don't need to deploy a chat agent app before sharing it, because Amazon Bedrock deploys the app for you when you share it. For more information, see [Share an Amazon Bedrock chat agent app](app-share.md).

**To deploy a chat agent app**

1. Navigate to the Amazon SageMaker Unified Studio landing page by using the URL from your administrator.

1. Access Amazon SageMaker Unified Studio using your IAM or single sign-on (SSO) credentials. For more information, see [Access Amazon SageMaker Unified Studio](getting-started-access-the-portal.md).

1. If the project that you want to use isn't already open, do the following:

   1. Choose the current project at the top of the page. If a project isn't already open, choose **Select a project**.

   1. Select **Browse all projects**. 

   1. In **Projects** select the project that you want to use.

1. Choose the **Build** menu option at the top of the page.

1. In **MACHINE LEARNING & GENERATIVE AI** choose **My apps**.

1. Open the app that you want to deploy.

1. Choose **Deploy**.

1. In **App description** enter a short description for the app. Make sure that the description lets users understand the purpose of the app.

1. Assign an alias by doing one of the following:
   + Choose **Create a new alias** to create a new alias. Then enter a name and an optional description for the alias.
   + Choose **Select an existing alias** to use an existing alias. Then select the existing alias that you want to use.

1. Choose **Deploy** to deploy the app. It might take a few seconds to deploy the app. 

1. To use the deployed app, select **Working draft** in the configuration pane and then select the name of the alias that you used to deploy the app.

1. Enter a prompt to try your deployed app.

# Modify the version of an Amazon Bedrock chat agent app
<a name="app-change-alias-version"></a>

You can change the version of a chat agent app that an agent node in a flow app uses, or the version of a chat agent app that is shared with other users. To change the version, you modify the alias for the chat agent app to reference the new version. After updating the alias, you don't need to update the flow app or shares of the chat agent app for them to use the new version. 

The following procedure shows you how to change the version of a chat agent app that an alias for the app references.



**To modify the version that an alias references**

1. Navigate to the Amazon SageMaker Unified Studio landing page by using the URL from your administrator.

1. Access Amazon SageMaker Unified Studio using your IAM or single sign-on (SSO) credentials. For more information, see [Access Amazon SageMaker Unified Studio](getting-started-access-the-portal.md).

1. If the project that you want to use isn't already open, do the following:

   1. Choose the current project at the top of the page. If a project isn't already open, choose **Select a project**.

   1. Select **Browse all projects**. 

   1. In **Projects** select the project that you want to use.

1. Choose the **Build** menu option at the top of the page.

1. In **MACHINE LEARNING & GENERATIVE AI** choose **My apps**.

1. Open the app that you want to use.

1. Choose the selector on the **Deploy** button and select **View aliases**. The **View and manage aliases** pane opens.

1. For the alias that you want to modify, choose **Edit**.

1. In the **Edit alias** pane, select the version that you want the alias to use in **Select version to associate with this alias**. 

1. (Optional) Update the name and description for the alias.

1. Choose **Save** to save your changes.

# Share an Amazon Bedrock chat agent app
<a name="app-share"></a>

A snapshot of an Amazon Bedrock in SageMaker Unified Studio chat agent app is a point-in-time capture of the app's state, including its code, configuration, and any associated data. 

You can share a snapshot with all members of your Amazon SageMaker Unified Studio domain, or with specific users or groups in the domain. When you first share a snapshot, you get a share link that you can send to users. If you share the snapshot with all users, Amazon SageMaker Unified Studio grants permission to a user when they first open the share link, and adds the snapshot to the user's shared assets list. If you share the snapshot with specific users and groups, the snapshot is immediately available in their shared assets list, and they can also use the share link to access it. By default, sharing a snapshot is restricted to only those users or groups that you select. 

When you share a chat agent app, Amazon SageMaker Unified Studio also publishes the chat agent app to the Amazon SageMaker AI Catalog. 

When sharing occurs, Amazon Bedrock [deploys](app-deploy.md) the snapshot of the chat agent app. When you first share a snapshot, Amazon Bedrock creates a new alias for the chat agent app and a new version of the app that represents the snapshot. On subsequent shares, Amazon Bedrock creates a new version of the app and associates the alias with the new version. If necessary, you can change the version that is associated with an alias. For more information, see [Modify the version of an Amazon Bedrock chat agent app](app-change-alias-version.md). 

**To share a chat agent app snapshot**

1. Navigate to the Amazon SageMaker Unified Studio landing page by using the URL from your administrator.

1. Access Amazon SageMaker Unified Studio using your IAM or single sign-on (SSO) credentials. For more information, see [Access Amazon SageMaker Unified Studio](getting-started-access-the-portal.md).

1. If the project that you want to use isn't already open, do the following:

   1. Choose the current project at the top of the page. If a project isn't already open, choose **Select a project**.

   1. Select **Browse all projects**. 

   1. In **Projects** select the project that you want to use.

1. Choose the **Build** menu option at the top of the page.

1. In **MACHINE LEARNING & GENERATIVE AI** choose **My apps**.

1. Open the chat agent app that you want to share.

1. Choose **Share**.

1. In **App description** enter a short description for the chat agent app. Make sure that the description lets users understand the purpose of the chat agent app.

1. Do one of the following:
   + If you want to share the chat agent app snapshot with all members of your Amazon SageMaker Unified Studio domain, turn on **Grant access with link**.
   + If you want to share the chat agent app snapshot with specific Amazon SageMaker Unified Studio domain users or groups, do the following in **Share with specific users or groups**:

     1. For **Member type** choose **Individual user** or **Group**, depending on the type of member that you want to share the chat agent app with.

     1. Search for the users or groups that you want to share the chat agent app with by entering the user name or group in the **Search by alias to invite members** text box.

     1. In the drop-down list, select the matching user name or group that you want to share the chat agent app with. 

     1. Choose **Add** to add the user or group.

1. Choose **Share** to share the chat agent app. 

1. When the success message appears, choose **Copy link** and send the link to the users that you are sharing the chat agent app snapshot with. If **Grant access with link** is off, the link only works for users that you have explicitly granted access to the chat agent app. 

As the chat agent app creator, you, and other members of the chat agent app project, can make changes to the chat agent app and share a fresh snapshot of it. Only the latest snapshot is available to users. Users that you share a snapshot with can't make any changes to the snapshot.

To change who you share the chat agent app with, open the chat agent app, choose **Share**, and make your changes. Choose **Done** to complete the changes. If you are sharing the snapshot with all users, turning off **Grant access with link** restricts access to users that you have specifically shared the snapshot with.

You can't stop sharing a snapshot without deleting the chat agent app. If you delete the chat agent app, the snapshot is no longer shared and is removed from the Amazon SageMaker AI Catalog. If you want to deny access to everyone without deleting the chat agent app, edit the snapshot and remove all users and groups. If you shared the snapshot with all users, turn off **Grant access with link**. Note that in this case Amazon SageMaker Unified Studio doesn't remove the snapshot from the Amazon SageMaker AI Catalog. 

If you need the share link later, share the chat agent app again and copy the share link. You can also change the users that you share the chat agent app with.

To see which chat agent apps you have shared, open the app project, choose **Asset gallery**, and then choose **My apps**. Check the **Share status** column for the chat agent app.

# Build a flow app with Amazon Bedrock
<a name="create-flows-app"></a>

A flow app lets you link prompts, supported foundation models, and other units of work, such as an Amazon Bedrock knowledge base, together to create generative AI workflows for end-to-end solutions. For example, you could create flow apps to do the following. 
+ **Create a playlist of music** – Create a flow connecting a prompt node and a knowledge base node. Provide the following prompt to generate a playlist: **Create a playlist**. After processing the prompt, the flow queries a knowledge base to look up information about local bands, such as the length of songs and genre of music. The flow then generates a playlist based on the information in the knowledge base.
+ **Troubleshoot using the error message and the ID of the resource that is causing the error** – The flow looks up the possible causes of the error from a documentation knowledge base, pulls system logs and other relevant information about the resource, and updates the faulty configurations and values for the resource.

In this section you create a flow app that generates a playlist of music from a knowledge base of songs by fictional local bands. 

To create a flow app, you use the *flow builder*, a tool in Amazon Bedrock in SageMaker Unified Studio for building and editing flow apps through a visual interface. You drag and drop nodes onto the interface and configure inputs and outputs for these nodes to define your flow. 

![\[Example Amazon Bedrock in SageMaker Unified Studio flow.\]](http://docs.aws.amazon.com/sagemaker-unified-studio/latest/userguide/images/bedrock/create-flow-kb-prompt-out.png)


In your flow you can apply logical conditions to direct the output from a node to different destinations. You can then run the flow within Amazon Bedrock in SageMaker Unified Studio and view the output.

The following list introduces you to the basic elements of a flow.
+ **Flow** – A flow is a construct consisting of a name, description, permissions, a collection of nodes, and connections between nodes. When you run a flow, the input to the flow is sent through each node of the flow until the flow emits the final output from an output node.

  
+ **Node** – A node is a step inside a flow. For each node, you configure its name, description, input, output, and any additional configurations. The configuration of a node differs based on its type. 

  For information about the types of nodes that Amazon Bedrock in SageMaker Unified Studio supports, see [Flow nodes available in Amazon Bedrock](nodes.md).
+ **Connection** – There are two types of connections used in flow apps:
  + A **data connection** is drawn between the output of one node (the *source node*) and the input of another node (the *target node*) and sends data from an upstream node to a downstream node. In the flow builder, data connections are solid lines.
  + A **conditional connection** is drawn between a condition in a condition node and a downstream node and sends data from the node that precedes the condition node to a downstream node if the condition is fulfilled. In the flow builder, conditional connections are dotted lines.
+ **Expressions** – An expression defines how to extract an input from the whole input entering a node. To learn how to write expressions, see [Define inputs with expressions](flows-expressions.md).
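To make these elements concrete, the sketch below models a tiny flow in Python. It is an illustration of the concepts only, with hypothetical node functions, not the flow builder's implementation:

```python
# A flow input node, a prompt node, and a flow output node, joined by
# data connections. Running the flow passes the input through each node.
nodes = {
    "Flow input":  lambda text: text,
    "Prompt":      lambda text: f"Create a playlist: {text}",
    "Flow output": lambda text: text,
}
connections = [("Flow input", "Prompt"), ("Prompt", "Flow output")]

def run_flow(flow_input):
    data = nodes["Flow input"](flow_input)
    for source, target in connections:
        # A data connection sends the source node's output to the target node.
        data = nodes[target](data)
    return data

print(run_flow("rock and pop songs"))
```

In the flow builder you express the same structure visually: the nodes are the steps, and the connections you draw between connection points determine how data moves between them.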

If you want to use your flow app outside of Amazon SageMaker Unified Studio, you can export and deploy the app to an AWS account. For more information, see [Use your app outside of Amazon SageMaker Unified Studio](app-export.md).

**Warning**  
Generative AI may give inaccurate responses. Avoid sharing sensitive information. Chats may be visible to others in your organization.

**Topics**
+ [Create a flow app with Amazon Bedrock](build-flow.md)
+ [Define inputs with expressions](flows-expressions.md)
+ [Use logic nodes to control flow](flows-logic-nodes.md)
+ [Use a chat agent app in a flow app](flows-use-chat-agent.md)
+ [Flow nodes available in Amazon Bedrock](nodes.md)

# Create a flow app with Amazon Bedrock
<a name="build-flow"></a>

In this section you first build a flow app that generates a playlist of music from an Amazon Bedrock knowledge base of songs by fictional local bands. Next, you use [Reusable prompts](creating-a-prompt.md) to add a prompt that customizes the playlist for different genres of music. 

**Topics**
+ [Step 1: Create an initial flow app](#build-flow-empty)
+ [Step 2: Add a Knowledge Base to your flow app](#build-flow-kb)
+ [Step 3: Add a prompt to your flow app](#build-flow-prompt)
+ [Step 4: Add a condition to your flow app](#build-flow-condition)

## Step 1: Create an initial flow app
<a name="build-flow-empty"></a>

In this procedure you create an initial flow app, which has a [Flow input](nodes.md#flow-node-input) node and a [Flow output](nodes.md#flow-node-output) node. 

A flow contains only one flow input node, which is where the flow begins. The flow input node takes your input and passes it to the next node in a data type of your choice (String, Number, Boolean, Object, or Array). In these procedures, the input to the flow is a String. To learn more about using different data types in a flow, see [Define inputs with expressions](flows-expressions.md). 

A flow output node extracts the input data from the previous node, based on the defined expression, and outputs the data. A flow can have multiple flow output nodes if there are multiple branches in the flow.

After completing the procedure, the flow app is empty, other than the flow input and flow output nodes. In the next step you add a knowledge base as a data source and run the flow app for the first time.

While you develop your app, you work on the current draft. You can save the current draft to the app history. Later you might want to restart work from a previous draft. For more information, see [Use app history to view and restore versions of an Amazon Bedrock app](app-history.md).

**To create an initial flow app**

1. Navigate to the Amazon SageMaker Unified Studio landing page by using the URL from your administrator.

1. Access Amazon SageMaker Unified Studio using your IAM or single sign-on (SSO) credentials. For more information, see [Access Amazon SageMaker Unified Studio](getting-started-access-the-portal.md).

1. On the Amazon SageMaker Unified Studio home page, navigate to the **Amazon Bedrock in SageMaker Unified Studio** tile.

   From the **Build chat agent app** dropdown button, select **Build flow**.  
![\[Create Amazon Bedrock in SageMaker Unified Studio flow.\]](http://docs.aws.amazon.com/sagemaker-unified-studio/latest/userguide/images/bedrock/bedrock-ide-build-create-flow.png)

1. In the **Select or create a new project to continue** dialog box, do one of the following:
   + If you want to use a new project, follow the instructions at [Create a new project](create-new-project.md). For the **Project profile** in step 1, choose **Generative AI application development**.
   + If you want to use an existing project, select the project that you want to use and then choose **Continue**. 

1. In the flow builder, choose the flow name (**Untitled flow-nnnn**) and enter **Local bands** as the name for the flow. 

1. In the **flow app builder** pane, select the **Nodes** tab. The center pane displays a **Flow input** node and a **Flow output** node. These are the input and output nodes for your flow. The circles on the nodes are connection points. In the next procedure, you use the connection points to connect a Knowledge Base node to the Flow input node and the Flow output node.   
![\[Input and output nodes in an empty Amazon Bedrock in SageMaker Unified Studio flow app.\]](http://docs.aws.amazon.com/sagemaker-unified-studio/latest/userguide/images/bedrock/bedrock-ide-empty-flow.png)

1. Next step: [Step 2: Add a Knowledge Base to your flow app](#build-flow-kb).

## Step 2: Add a Knowledge Base to your flow app
<a name="build-flow-kb"></a>

In this procedure, you add a [Knowledge Base](nodes.md#flow-node-kb) node as a data source to the flow that you created in [Step 1: Create an initial flow app](#build-flow-empty). The Knowledge Base you add is a comma-separated values (CSV) file containing a list of fictitious songs and artists. The list includes the duration (in seconds) and genre of each song. For more information about Knowledge Bases, see [Add a Knowledge Base to your Amazon Bedrock app](data-sources.md).

During the procedure, you make connections from the Flow input node to the Knowledge Base node and from the Knowledge Base node to the Flow output node. At some point, you might need to delete a node or remove a node connection. To delete a node, select the node that you want to delete and press the Delete key. To remove a connection, choose the connection that you want to delete and then press the Delete key. 

When you run the flow with the input **Create a playlist**, the app creates a playlist using songs only from the Knowledge Base. 

**To create the flow with a Knowledge Base**

1. Create a CSV file named *songs.csv* and fill it with the following fictitious CSV data. This is the data source for your Knowledge Base. Save the CSV file to your local computer.

   ```
   song,artist,genre,length-seconds
   "Celestial Odyssey","Starry Renegades","Cosmic Rock",240
   "Neon Rapture","Synthwave Siren","Synthwave Pop",300
   "Wordsmith Warriors","Lyrical Legions","Lyrical Flow",180
   "Nebula Shredders","Galactic Axemen","Cosmic Rock",270
   "Electro Euphoria","Neon Nomads","Synthwave Pop",210
   "Rhythm Renegades","Percussive Pioneers","Lyrical Flow",240
   "Stardust Rift","Cosmic Crusaders","Cosmic Rock",180
   "Synthwave Serenade","Electro Enchanters","Synthwave Pop",300
   "Lyrical Legends","Rhyme Royale","Lyrical Flow",240
   "Supernova Shredders","Amplified Ascension","Cosmic Rock",300
   "Celestial Chords","Ethereal Echoes","Cosmic Rock",240
   "Neon Nirvana","Synthwave Sirens","Synthwave Pop",270
   "Verbal Virtuoso","Lyrical Maestros","Lyrical Flow",210
   "Cosmic Collision","Stellar Insurgents","Cosmic Rock",180
   "Pop Paradox","Melodic Mavericks","Synthwave Pop",240
   "Flow Fusion","Verbal Virtuosos","Lyrical Flow",300
   "Shredding Shadows","Crimson Crusaders","Cosmic Rock",270
   "Synth Serenade","Electro Enchanters","Synthwave Pop",180
   "Wordsmith Warlords","Lyrical Legionnaires","Lyrical Flow",240
   "Sonic Supernova","Amplified Ascension","Cosmic Rock",210
   "Celestial Symphony","Ethereal Ensemble","Cosmic Rock",300
   "Electro Euphoria","Neon Nomads","Synthwave Pop",180
   "Lyrical Legends","Rhyme Royale","Lyrical Flow",270
   "Crimson Crescendo","Scarlet Serenaders","Cosmic Rock",240
   "Euphoric Tides","Melodic Mystics","Synthwave Pop",210
   "Rhythm Renegades","Percussive Pioneers","Lyrical Flow",180
   "Cosmic Collision","Stellar Insurgents","Cosmic Rock",300
   "Stardust Serenade","Celestial Crooners","Synthwave Pop",240
   "Wordsmith Warriors","Lyrical Legions","Lyrical Flow",270
   "Sonic Supernova III","Amplified Ascension","Cosmic Rock",180
   ```

1. Open the flow app that you created in [Step 1: Create an initial flow app](#build-flow-empty).

1. Add and configure a Knowledge Base node by doing the following:

   1. In the **flow app builder** pane, select the **Nodes** tab.

   1. From the **Data** section, drag a **Knowledge Base** node onto the flow builder canvas.

   1. The circles on the nodes are connection points. Using your mouse, choose the circle on the **Flow input** node and drag a line to the circle in the **Input** section of the Knowledge Base node that you just added. 

   1. Connect the **Output** of the Knowledge Base node in your flow with the **Input** of the **Flow output** node.   
![\[Knowledge Base node in an Amazon Bedrock in SageMaker Unified Studio flow app.\]](http://docs.aws.amazon.com/sagemaker-unified-studio/latest/userguide/images/bedrock/bedrock-ide-build-prompt-flow-kb.png)

   1. Select the Knowledge Base node that you just added. 

   1. In the **flow builder** pane, choose the **Configure** tab and do the following:

      1. In **Node name**, enter **Local\_bands\_knowledge\_base**.

      1. In **Knowledge Base Details**, choose **Create new Knowledge Base** to open the **Create Knowledge Base** pane.

      1. For **Knowledge Base name**, enter **Local-bands**.

      1. For **Knowledge Base description**, enter **Songs by local bands. Includes song, artist, genre, and song length (in seconds).**.

      1. In **Add data sources**, choose **Local file**.

      1. Choose **Click to upload** and upload the CSV file (songs.csv) that you created in step 1. Alternatively, add your source document by dragging and dropping the CSV from your computer.

      1. For **Parsing**, leave the selection as **Default parsing**.

      1. For **Embeddings model**, choose a model for converting your data into vector embeddings.

      1. For **Vector store**, choose **OpenSearch Serverless**.

      1. Choose **Create** to create the Knowledge Base. It might take a few minutes to create the Knowledge Base.

   1. Back in the **flow builder** pane, in **Select Knowledge Base**, select the Knowledge Base that you just created (Local-bands).

   1. In **Select response generation model**, select the model that you want the Knowledge Base to generate responses with.

   1. (Optional) In **Select guardrail** select an existing guardrail or create a new guardrail. For more information, see [Safeguard your Amazon Bedrock app with a guardrail](guardrails.md).

1. Choose **Save** to save the app.

1. Test your prompt by doing the following:

   1. On the right side of the page, choose **<** to open the test pane.

   1. Enter **Create a playlist** in the prompt **Text** box.

   1. Press Enter on the keyboard or choose the run button to test the prompt.

   1. If necessary, make changes to your flow. If you are satisfied with the flow, choose **Save**. 

1. Next step: [Step 3: Add a prompt to your flow app](#build-flow-prompt).

## Step 3: Add a prompt to your flow app
<a name="build-flow-prompt"></a>

In this procedure, you add a prompt to the flow by adding a [prompt node](nodes.md#flow-node-prompt). The prompt lets you easily choose which genre of songs to include in the playlist that the flow generates. For more information, see [Reuse and share Amazon Bedrock prompts](prompt-mgmt.md).

**To add a prompt to the flow**

1. In the **flow builder** pane, select **Nodes**.

1. From the **Orchestration** section, drag a **Prompt** node onto the flow builder canvas.

1. Select the node you just added. 

1. In the **Configurations** tab of the **flow builder** pane, do the following:

   1. In **Node name**, enter **Playlist\_generator\_node**.

   1. In **Prompt details** choose **Create new prompt** to open the **Create prompt** pane.

   1. For **Prompt name**, enter **Playlist\_generator\_prompt**.

   1. For **Model**, choose the model that you want the prompt to use. 

   1. For **Prompt message**, enter **Create a playlist of songs in the genre \{\{genre\}\}.** 

   1. (Optional) In **Model configs**, make changes to the inference parameters or provide system instruction prompts.

   1. Choose **Save draft and create version** to create the prompt. It might take a couple of minutes to finish creating the prompt.

1. In the flow builder, choose the prompt node that you just added.

1. In the **Configure** tab, do the following in the **Prompt details** section: 

   1. In **Prompt** select the prompt that you just created.

   1. In **Version** select the version (**1**) of the prompt to use.

1. (Optional) In **Select guardrail** select an existing guardrail. For more information, see [Safeguard your Amazon Bedrock app with a guardrail](guardrails.md).

1. Update the flow paths by doing the following:

   1. Delete the connection from the output of the **Knowledge Base** node to the **Flow output** node.

   1. Connect the output of the **Knowledge Base** node to the input of the **Prompt** node. 

   1. Connect the output of the **Prompt** node to the input of the **Flow output** node. 

1. Choose **Save** to save the flow. The flow should look similar to the following.  
![\[Knowledge Base and prompt node in an Amazon Bedrock in SageMaker Unified Studio flow app.\]](http://docs.aws.amazon.com/sagemaker-unified-studio/latest/userguide/images/bedrock/create-flow-kb-prompt-out.png)

1. Test your prompt by doing the following:

   1. On the right side of the app flow page, choose **<** to open the test pane.

   1. For the **Text** box, enter **Cosmic Rock**. 

   1. Press Enter on the keyboard or choose the run button to test the prompt. The response should be a playlist of songs in the Cosmic Rock genre.

   1. Change the prompt to **Synthwave Pop** and run the prompt again. The songs should now be from the Synthwave Pop genre.

   1. If necessary, make changes to your flow. If you are satisfied with the flow, choose **Save**. 

1. Next step: [Step 4: Add a condition to your flow app](#build-flow-condition).

## Step 4: Add a condition to your flow app
<a name="build-flow-condition"></a>

In this procedure, you add a [condition](nodes.md#flow-node-condition) node to the flow so that if you enter the prompt **Cosmic Rock**, the flow generates a playlist only from the local bands Knowledge Base. If you enter a different genre, the flow uses the playlist generator prompt to create a playlist of well-known artists in that genre.

**To add a condition to the flow**

1. In the **flow app builder** pane, choose **Nodes**.

1. From the **Logic** section, drag a **Condition** node onto the flow builder canvas.

1. Select the **Condition** node that you just added. 

1. Add the flow that generates a playlist from local bands by doing the following:

   1. In the **Inputs** section of the **Configurations** tab, change the **Node Name** to **Local\_cosmic\_rock\_node**. 

   1. In the **Inputs** section, change the **Name** to **genre**. 

   1. In the **Conditions** section, do the following:

      1. For **Name**, enter **Local\_cosmic\_rock**.

      1. For **Condition**, enter the condition **genre == "Cosmic Rock"**.

   1. In the flow builder, choose the condition node that you just added.

   1. Connect the **Go to node** output to the **Knowledge Base** node.

   1. Connect the **Output** of the **Flow input** node to the **Input** of the **Condition** node. Leave the existing connection to the **Knowledge Base** node, as it ensures that the prompt is passed to the Knowledge Base.  
![\[Condition node in an Amazon Bedrock in SageMaker Unified Studio flow app.\]](http://docs.aws.amazon.com/sagemaker-unified-studio/latest/userguide/images/bedrock/bedrock-ide-build-prompt-local-condition.png)

1. Choose **Save** to save your flow app.

1. Add the flow that generates a playlist by well-known bands by doing the following:

   1. In the **flow app builder** pane, select **Nodes**.

   1. From the **Orchestration** section, drag a **Prompt** node onto the flow builder canvas.

   1. Select the node you just added. 

   1. Choose the **Configurations** tab of the **flow builder** pane and do the following:

      1. For **Node name**, enter **Well\_known\_artist\_playlist\_generator\_node**.

      1. In the **Prompt details** section, choose the **Playlist\_generator\_prompt** prompt that you previously created.

      1. For **Version**, select the version (**1**) of the prompt to use.

      1. Connect the **Output** from the **Flow input** node to the **Input** of the prompt that you just created.

      1. In the **Condition** node, connect the **If all conditions are false** output to the new **Prompt** node. 

      1. In the **flow app builder** pane, select **Nodes**.

      1. From the **Other** section, drag a **Flow output** node onto the flow builder canvas.

      1. Connect the **Output** of the new **Prompt** node (Well\_known\_artist\_playlist\_generator\_node) to the input of the new **Flow output** node.

1. Choose **Save** to save the flow. The flow should look similar to the following.  
![\[Knowledge Base, prompt, and condition node in an Amazon Bedrock in SageMaker Unified Studio flow app.\]](http://docs.aws.amazon.com/sagemaker-unified-studio/latest/userguide/images/bedrock/create-flow-kb-prompt-condition-out.png)

1. Test your prompt by doing the following:

   1. On the right side of the app flow page, choose **<** to open the test pane.

   1. In **Enter prompt**, enter **Cosmic Rock**.

   1. Press Enter on the keyboard or choose the run button to test the prompt. The response should be a playlist of songs in the Cosmic Rock genre with bands that are only from the Knowledge Base.

   1. Change the prompt to **Classic Rock** and run the prompt again. The songs should now be by well-known bands from the classic rock genre.
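The branching you just built can be summarized in a few lines of Python. This is a conceptual sketch of the condition node's routing only, not service code, and the branch names are illustrative assumptions:

```python
def route_playlist_request(genre: str) -> str:
    # Mirrors the condition genre == "Cosmic Rock" from the tutorial:
    # a match sends the request down the Knowledge Base branch; any
    # other genre goes to the well-known-artist prompt branch.
    if genre == "Cosmic Rock":
        return "knowledge-base-branch"
    return "well-known-artist-branch"

print(route_playlist_request("Cosmic Rock"))   # knowledge-base-branch
print(route_playlist_request("Classic Rock"))  # well-known-artist-branch
```

Because the Flow input node stays connected to both branches, the same prompt text reaches whichever branch the condition selects.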

# Define inputs with expressions
<a name="flows-expressions"></a>

When you configure the inputs for a node, you must define each input in relation to the whole input that enters the node. The whole input can be a string, number, boolean, array, or object. To define an input in relation to the whole input, you use a subset of supported expressions based on [JsonPath](https://github.com/json-path/JsonPath). Every expression must begin with `$.data`, which refers to the whole input. Note the following when using expressions:
+ If the whole input is a string, number, or boolean, the only expression that you can use to define an individual input is `$.data`.
+ If the whole input is an array or object, you can extract a part of it to define an individual input.

As an example to understand how to use expressions, let's say that the whole input is the following JSON object:

```
{
    "animals": {
        "mammals": ["cat", "dog"],
        "reptiles": ["snake", "turtle", "iguana"]
    },
    "organisms": {
        "mammals": ["rabbit", "horse", "mouse"],
        "flowers": ["lily", "daisy"]
    },
    "numbers": [1, 2, 3, 5, 8]
}
```

You can use the following expressions to extract a part of the input (the examples refer to what would be returned from the preceding JSON object):


****  

| Expression | Meaning | Example | Example result | 
| --- | --- | --- | --- | 
| \$.data | The entire input. | \$.data | The entire object | 
| .name | The value for a field called name in a JSON object. | \$.data.numbers | [1, 2, 3, 5, 8] | 
| [int] | The member at the index specified by int in an array. | \$.data.animals.reptiles[1] | turtle | 
| [int1, int2, ...] | The members at the indices specified by each int in an array. | \$.data.numbers[0, 3] | [1, 5] | 
| [int1:int2] | An array consisting of the items at the indices between int1 (inclusive) and int2 (exclusive) in an array. Omitting int1 or int2 is equivalent to marking the beginning or end of the array. | \$.data.organisms.mammals[1:] | ["horse", "mouse"] | 
| \* | A wildcard that can be used in place of a name or int. If there are multiple results, the results are returned in an array. | \$.data.\*.mammals | [["cat", "dog"], ["rabbit", "horse", "mouse"]] | 
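The table entries can be checked against plain Python. The following sketch is illustrative only (it is not how Amazon Bedrock evaluates expressions); it mirrors each example from the table with ordinary dictionary and list operations:

```python
# The sample JSON object from above, as a Python dictionary.
whole_input = {
    "animals": {
        "mammals": ["cat", "dog"],
        "reptiles": ["snake", "turtle", "iguana"],
    },
    "organisms": {
        "mammals": ["rabbit", "horse", "mouse"],
        "flowers": ["lily", "daisy"],
    },
    "numbers": [1, 2, 3, 5, 8],
}

# $.data.numbers -- the value of a named field
numbers = whole_input["numbers"]                      # [1, 2, 3, 5, 8]

# $.data.animals.reptiles[1] -- the member at a single index
reptile = whole_input["animals"]["reptiles"][1]       # "turtle"

# $.data.numbers[0, 3] -- the members at several indices
picked = [whole_input["numbers"][i] for i in (0, 3)]  # [1, 5]

# $.data.organisms.mammals[1:] -- a slice from index 1 to the end
tail = whole_input["organisms"]["mammals"][1:]        # ["horse", "mouse"]

# $.data.*.mammals -- wildcard over top-level values that have "mammals"
wild = [v["mammals"] for v in whole_input.values()
        if isinstance(v, dict) and "mammals" in v]
# [["cat", "dog"], ["rabbit", "horse", "mouse"]]

print(numbers, reptile, picked, tail, wild)
```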

The following procedure shows how to use expressions to identify fields in a JSON object that you send to a prompt node. The prompt generates a playlist of songs. The JSON object you pass to the flow identifies the number of songs that you want in the playlist and the genre of music that you want the songs to represent. For example, enter the following JSON object to request a playlist of 3 songs in the pop genre.

**\{ "genre": "Pop", "number": 3 \}**

**To use an expression**

1. Create an empty flow app by doing [Step 1: Create an initial flow app](build-flow.md#build-flow-empty).

1. In the flow builder, choose the **Flow input** node.

1. In the **flow builder** pane choose the **Configure** tab.

1. In the **Outputs** section, choose **Type** and then select **Object**.

1. In the **flow builder** pane, select **Nodes**.

1. From the **Orchestration** section, drag a **Prompt** node onto the flow builder canvas.

1. Select the node you just added. 

1. In the **Configurations** tab of the **flow builder** pane, do the following:

   1. For **Node name**, enter **playlist\_songs\_genre\_node**.

   1. In **Prompt details** choose **Create new prompt** to open the **Create prompt** pane.

   1. For **Prompt name**, enter **playlist\_songs\_genre\_prompt**.

   1. For **Model**, choose the model that you want the prompt to use. 

   1. For **Prompt message**, enter **Create a playlist of \{\{number\}\} songs that are in the \{\{genre\}\} genre of music.** 

   1. (Optional) In **Model configs**, make changes to the inference parameters.

   1. Choose **Save draft and create version** to create the prompt. It might take a couple of minutes to finish creating the prompt.

1. In the flow builder, choose the prompt node that you just added.

1. Choose the **Configure** tab and do the following in the **Prompt details** section: 

   1. For **Prompt**, select the prompt that you just created (**playlist\_songs\_genre\_prompt**).

   1. For **Version**, select the version (**1**) of the prompt to use.

   1. For the **number** input in the **Inputs** section, do the following:

      1. Change the value of **Type** to **Number**.

      1. Change the value of **Expression** to **\$.data.number**.

   1. For the **genre** input in the **Inputs** section, do the following:

      1. Make sure the value of **Type** is **String**. 

      1. Change the expression for the input to **\$.data.genre**.  
![\[Input expressions for a JSON object passed to a prompt node in an Amazon Bedrock in SageMaker Unified Studio flow app.\]](http://docs.aws.amazon.com/sagemaker-unified-studio/latest/userguide/images/bedrock/create-flow-json-expression-configure.png)

1. Connect the output from the **Flow input** node to the **number** input of the Prompt node. 

1. Connect the output from the **Flow input** node to the **genre** input of the Prompt node. 

1. Connect the output from the Prompt node to the input of the **Flow output** node. 

1. Choose **Save** to save the flow. The flow should look similar to the following.  
![\[JSON input to a prompt node in an Amazon Bedrock in SageMaker Unified Studio flow app.\]](http://docs.aws.amazon.com/sagemaker-unified-studio/latest/userguide/images/bedrock/create-flow-prompt-expression-json.png)

1. Test your prompt by doing the following:

   1. On the right side of the page, choose **<** to open the **Test** pane.

   1. Enter the following JSON in the **Enter prompt** text box.

      ```
      { 
          "genre": "Pop",
          "number": 3 
      }
      ```

   1. Press Enter on your keyboard or choose the run button to test the prompt. The response should be a playlist of 3 songs in the pop music genre.

# Use logic nodes to control flow
<a name="flows-logic-nodes"></a>

Within an Amazon Bedrock in SageMaker Unified Studio flow app you can use logic flows to control how the flow processes input. 

The [condition](nodes.md#flow-node-condition) node lets you change the flow of processing based on values passed to the node. For example, suppose you have a flow that creates music playlists. You can use a condition node to direct requests for local artists to a sub-flow that uses a knowledge base of local artist information. For national artists, the knowledge base wouldn't be needed and a different flow path can be used. For an example, see [Step 4: Add a condition to your flow app](build-flow.md#build-flow-condition).

You can also use the [iterator](nodes.md#flow-node-iterator) and [collector](nodes.md#flow-node-collector) nodes to process arrays of information. For example, a radio station might want descriptions and song suggestions for a list of artists. With a flow, you can send a list (Array) of artists to an iterator node, which then passes each artist to a prompt. The prompt processes the artists in the array, one at a time, to get the required descriptions and song suggestions. The collector node collects the results of the prompt as an array, which can then be sent to other nodes.
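Conceptually, the iterator-prompt-collector pattern works like mapping a function over a list and gathering the results. The following Python sketch is a mental model only; `describe_artist` is a hypothetical stand-in for the prompt node, which in a real flow invokes the configured model:

```python
def describe_artist(artist: str) -> str:
    # Hypothetical stand-in for the prompt node; a real flow sends
    # each artist to a model and returns the model completion.
    return f"Artist: {artist} | Description: ... | Popular song: ..."

def run_flow(artists: list[str]) -> list[str]:
    array_size = len(artists)      # the iterator node's arraySize output
    collected = []
    for array_item in artists:     # the iterator emits one arrayItem at a time
        collected.append(describe_artist(array_item))
    # The collector node returns the results as a single array once it
    # has received arraySize items.
    assert len(collected) == array_size
    return collected

results = run_flow(["Stereophonics", "Manic Street Preachers"])
print(results[0])  # Artist: Stereophonics | Description: ... | Popular song: ...
```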

The following procedure shows how to use iterator and collector nodes to generate descriptions for each artist in a list. The flow also suggests a popular song and a less well-known song for each artist. When you run the flow, you supply the list of artists as an array, such as the following. 

**["Stereophonics", "Manic Street Preachers"]**

**To get artist descriptions**

1. Create an empty flow app by doing [Step 1: Create an initial flow app](build-flow.md#build-flow-empty). In step 5, name the app **band info**.

1. On the flow canvas, select the **Flow input** node.

1. In the **flow builder** pane choose the **Configure** tab.

1. In the **Outputs** section, choose **Type** and then select **Array**.

1. In the **flow app builder** pane, select **Nodes**.

1. From the **Logic** section, drag an **Iterator** node onto the builder canvas.

1. Select the **Iterator** node.

1. In the **Configure** tab of the **flow builder** pane, do the following:

   1. For **Node name**, enter **artist\_list**.

   1. In the **Output** section, make sure the **Type** for **arrayItem** is **String**.

   1. In the **Output** section, make sure the **Type** for **arraySize** is **Number**.

1. On the canvas, connect **document** from the output of the Flow input node to the **array** input of the **Iterator** node.

1. In the **flow app builder** pane, select **Nodes**.

1. From the **Orchestration** section, drag a **Prompt** node onto the flow builder canvas.

1. Select the **Prompt** node.

1. In the **Configure** tab of the **flow app builder** pane, do the following:

   1. For **Node name**, enter **artist\_description**.

   1. In **Prompt details** choose **Create new prompt** to open the **Create prompt** pane.

   1. For **Prompt name**, enter **get\_artist\_description\_prompt**.

   1. For **Model**, choose the model that you want the prompt to use. 

   1. For **Prompt message** enter the following:

      ```
      Give a one sentence description about the music played by the artist {{artist}}. Format your response as follows: 
                          
      Artist: the artist name
      Description: the artist description
      Popular song: a popular song by the artist
      Deep cut song: a less well-known song by the artist
      ```

   1. (Optional) In **Model configs**, make changes to the inference parameters.

   1. Choose **Save draft and create version** to create the prompt. It might take a couple of minutes to finish creating the prompt.

1. In the **flow app builder** pane, select **Nodes**.

1. From the **Logic** section, drag a **Collector** node onto the canvas.

1. Select the **Collector** node on the canvas.

1. In the **Configure** tab of the **flow builder** pane, do the following:

   1. For **Node name**, enter **artist\_descriptions**.

1. Select the **Iterator** node on the canvas, and do the following: 

   1. Connect **arrayItem** to the **artist** input of the Prompt. 

   1. Connect **arraySize** to the **arraySize** input of the Collector node. 

1. In the **Prompt** node, connect **modelCompletion** to the **arrayItem** input of the **Collector** node.

1. In the **Collector** node, connect the **collectedArray** output to the **document** input of the **Flow output** node.

1. Select the **Flow output** node on the canvas.

1. In the **Configure** tab of the **flow app builder** pane, do the following:

   1. In the **Outputs** section, change the type of **collectedArray** to **Array**.

1. Choose **Save** to save the flow. The flow should look similar to the following.  
![\[Amazon Bedrock in SageMaker Unified Studio flow app with iterator and collector logic nodes.\]](http://docs.aws.amazon.com/sagemaker-unified-studio/latest/userguide/images/bedrock/bedrock-ide-flow-logic.png)

1. Test your flow by doing the following:

   1. On the right side of the page, choose **<** to open the **Test** pane.

   1. Enter the following JSON in the **Enter prompt** text box.

      ```
      ["Stereophonics", "Manic Street Preachers"]
      ```

   1. Press Enter on your keyboard or choose the run button to test the prompt. The response should be an array of artists with descriptions and suggested songs for each artist.

# Use a chat agent app in a flow app
<a name="flows-use-chat-agent"></a>

You can use a chat agent app in a flow app by adding an [agent node](nodes.md#flow-node-agent) that references the chat agent app.

Before you can use a chat agent app in a flow, you must first [deploy](app-deploy.md) the chat agent app. Deployment creates an alias for the chat agent app and a new version of the chat agent app. In your flow, you add an agent node and configure it to reference the alias of the chat agent app.

The following procedure shows how to integrate a chat agent app as an agent node in a flow app. After completing this procedure you can add other nodes to the flow, as necessary for your solution.

**To use a chat agent app in an agent node**

1. Create a chat agent app by following instructions at [Build a chat agent app with Amazon Bedrock](create-chat-app.md). For this procedure, you only need to do [Step 1: Create the initial chat agent app](create-chat-app-with-components.md#chat-app-create-app), but you can complete the other steps, as desired.

1. Deploy the chat agent app by following the instructions at [Deploy an Amazon Bedrock chat agent app](app-deploy.md).

1. Create an empty flow app by following the instructions at [Step 1: Create an initial flow app](build-flow.md#build-flow-empty). Don't do the subsequent steps on that page.

1. In the **flow app builder** pane, select the **Nodes** tab.

1. From the **Orchestration** section, drag an **Agent** node onto the flow builder canvas.

1. In the flow builder, select the **Agent** node, if it isn't already selected. 

1. In the **flow builder** pane choose the **Configure** tab.

1. For **Node name**, enter a name for the agent node.

1. For **Chat agent**, select the name of the chat agent app that you want to use.

1. For **Agent**, select the alias for the chat agent app that you want to use.

1. Connect the **Input** of the Agent node with the **output** of the **Flow input** node. 

1. Connect the **Output** of the Agent node with the **Input** of the **Flow output** node. 

1. Choose **Save** to save the flow. The flow should look similar to the following image.  
![\[Agent node in an Amazon Bedrock in SageMaker Unified Studio flow app.\]](http://docs.aws.amazon.com/sagemaker-unified-studio/latest/userguide/images/bedrock/bedrock-ide-flow-agent-node.png)

1. Test your prompt by doing the following:

   1. On the right side of the page, choose **<** to open the **Test** pane.

   1. In the **Enter prompt** text box, enter **Create a playlist of pop music**.

   1. Press Enter on your keyboard or choose the run button to test the prompt. The response should be a playlist of music in the pop genre.

# Flow nodes available in Amazon Bedrock
<a name="nodes"></a>

Amazon Bedrock in SageMaker Unified Studio provides the following node types to build your flow app. A node comprises the following:
+ Name – The name for the node.
+ Type – The type of the node. For more information, see [Flow nodes available in Amazon Bedrock](#nodes).
+ Inputs – Provide a name and data type for each input. Some nodes have pre-defined names or types that you must use. In the expression field, define the part of the whole input to use as the individual input. For more information, see [Define inputs with expressions](flows-expressions.md).

   In the flow builder, an input appears as a circle on the left edge of a node. Connect each input to an output of an upstream node.
+ Outputs – Provide a name and data type for each output. Some nodes have pre-defined names or types that you must use. In the flow builder, an output appears as a circle on the right edge of a node. Connect each output to at least one input in a downstream node. If an output from a node is sent to more than one node, or if a condition node is included, the path of a flow will split into multiple branches. Each branch can potentially yield another output in the flow response.
+ Configuration – You define node-specific fields at the top of the node. 
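As a mental model only (not an actual API), a node's anatomy can be sketched as a small data structure. The class and field names here are illustrative assumptions, not identifiers from the service:

```python
from dataclasses import dataclass, field

@dataclass
class NodeIO:
    name: str                    # for example, "document" or "arrayItem"
    data_type: str               # String, Number, Boolean, Object, or Array
    expression: str = "$.data"   # which part of the whole input to use

@dataclass
class FlowNode:
    name: str                          # Name - the name for the node
    node_type: str                     # Type - such as Prompt or Condition
    inputs: list = field(default_factory=list)     # connect to upstream outputs
    outputs: list = field(default_factory=list)    # connect to downstream inputs
    configuration: dict = field(default_factory=dict)  # node-specific fields

# The flow output node from this guide, modeled under these assumptions:
output_node = FlowNode(
    name="Flow output",
    node_type="Output",
    inputs=[NodeIO("document", "String")],
)
print(output_node.inputs[0].expression)  # $.data
```

The `expression` default reflects the rule in [Define inputs with expressions](flows-expressions.md): when the whole input is a string, the only valid expression is `$.data`.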

**Note**  
Amazon Bedrock in SageMaker Unified Studio supports a subset of the nodes that are available in Amazon Bedrock. For more information, see [Node types in flow](https://docs.aws.amazon.com/bedrock/latest/userguide/flows-nodes.html). 

**Topics**
+ [Input node](#flow-node-input)
+ [Output node](#flow-node-output)
+ [Collector node](#flow-node-collector)
+ [Condition node](#flow-node-condition)
+ [Iterator node](#flow-node-iterator)
+ [Iterator node outputs](#flow-node-iterator-outputs)
+ [Prompt node](#flow-node-prompt)
+ [Knowledge Base node](#flow-node-kb)
+ [Agent node](#flow-node-agent)
+ [S3 storage node](#flow-node-s3-storage)
+ [S3 retrieval node](#flow-node-s3-retrieval)
+ [Adding an Amazon S3 bucket](#adding-s3-bucket)

## Input node
<a name="flow-node-input"></a>

Every flow contains exactly one flow input node, and the flow begins with it. When you run the flow, the input is fed into this node and the configured output is passed to the next node.

### Input node inputs
<a name="flow-node-input-inputs"></a>


| Name | Type | Expression | 
| --- | --- | --- | 
|  N/A  |  N/A  |  N/A  | 

### Input node outputs
<a name="flow-node-input-outputs.title"></a>


| Name | Type | 
| --- | --- | 
|  document  |  String, Number, Boolean, Object and Array.   | 

## Output node
<a name="flow-node-output"></a>

A flow output node extracts the input data from the previous node, based on the defined expression, and returns it. A flow can have multiple flow output nodes if there are multiple branches in the flow.

### Output node inputs
<a name="flow-node-output-inputs"></a>


| Name | Type | Expression | 
| --- | --- | --- | 
|  document  |  String, Number, Boolean, Object, and Array.   | Yes | 

### Output node outputs
<a name="flow-node-output-outputs.title"></a>


| Name | Type | 
| --- | --- | 
|  N/A  |  N/A  | 

## Collector node
<a name="flow-node-collector"></a>

A collector node takes an iterated input, together with the size of the array to build, and returns the items as an array. You can use a collector node downstream from an iterator node to collect the iterated items after sending them through intermediate nodes.

### Collector inputs
<a name="flow-node-collector-inputs"></a>


| Name | Type | Expression | 
| --- | --- | --- | 
|  arrayItem  |  String, Number, Boolean, Object, Array  |  Yes  | 
|  arraySize  |  Number  |  Yes  | 

### Collector outputs
<a name="flow-node-collector-outputs"></a>


| Name | Type | 
| --- | --- | 
|  collectedArray  |  Array  | 

## Condition node
<a name="flow-node-condition"></a>

A condition node sends data from the previous node to different nodes, depending on the conditions that are defined. A condition node can take multiple inputs.
+ **Node name** – Any
+ **Input field name** – Any
+ **Input field types** – String, Number, Boolean, Object and Array. 
+ **Input expression** – Yes 
+ **Condition field name** – Any
+ **Output field types** – String, Number, Boolean, Object and Array. 
+ **Output expression** – Yes 

### Condition expressions
<a name="flows-node-condition-expr"></a>

To define a condition, you refer to an input by its name and compare it to a value using any of the following relational operators:


****  

| Operator | Meaning | Supported data types | Example usage | Example meaning | 
| --- | --- | --- | --- | --- | 
| == | Equal to (the data type must also be equal) | String, Number, Boolean | A == B | If A is equal to B | 
| != | Not equal to | String, Number, Boolean | A != B | If A isn't equal to B | 
| > | Greater than | Number | A > B | If A is greater than B | 
| >= | Greater than or equal to | Number | A >= B | If A is greater than or equal to B | 
| < | Less than | Number | A < B | If A is less than B | 
| <= | Less than or equal to | Number | A <= B | If A is less than or equal to B | 

You can compare inputs to other inputs or to a constant in a conditional expression. For example, if you have a numerical input called `profit` and another one called `expenses`, both **profit > expenses** or **profit <= 1000** are valid expressions.

You can use the following logical operators to combine expressions for more complex conditions. We recommend that you use parentheses to resolve ambiguities in grouping of expressions:


****  

| Operator | Meaning | Example usage | Example meaning | 
| --- | --- | --- | --- | 
| and | Both expressions are true | (A < B) and (C == 1) | If A is less than B, and C is equal to 1 | 
| or | At least one expression is true | (A != 2) or (B > C) | If A isn't equal to 2, or B is greater than C | 
| not | The expression isn't true | not (A > B) | If A isn't greater than B (equivalent to A <= B) | 
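To make the semantics concrete, the following Python sketch evaluates condition expressions over named inputs the way a condition node routes data. This is an illustration only, not the flow runtime; the branch names and the 1000 threshold are hypothetical, and the input names `profit` and `expenses` come from the example above:

```python
# Minimal sketch of condition-node semantics: each condition maps an
# expression over named inputs to a branch (illustration only).
inputs = {"profit": 1500, "expenses": 900}

# Conditions are checked in order; the first that evaluates to True
# decides which downstream branch receives the data.
conditions = [
    ("HighProfit", lambda i: i["profit"] > i["expenses"] and i["profit"] > 1000),
    ("BreakEven", lambda i: i["profit"] == i["expenses"]),
]

branch = next(
    (name for name, test in conditions if test(inputs)),
    "default",  # a condition node always falls through to a default branch
)
print(branch)  # HighProfit for the inputs above
```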

## Iterator node
<a name="flow-node-iterator"></a>

An iterator node takes an array and iteratively returns its items as output to the downstream node. The inputs to the iterator node are processed one by one, not in parallel. The flow output node returns the final result for each input in a different response. You can also use a collector node downstream from the iterator node to collect the iterated responses and return them as an array, in addition to the size of the array.

### Iterator node inputs
<a name="flow-node-iterator-inputs"></a>


| Name | Type | Expression | 
| --- | --- | --- | 
|  array  |  Array  |  Yes  | 

## Iterator node outputs
<a name="flow-node-iterator-outputs"></a>


| Name | Type | 
| --- | --- | 
|  arrayItem  |  String, Number, Boolean, Object, Array  | 
|  arraySize  |  Number  | 
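The iterator and collector behavior can be sketched as follows. This is a simplified Python illustration of the data flow, not the actual runtime; `process` is a hypothetical stand-in for the nodes that you place between the iterator and the collector:

```python
# Sketch of iterator -> processing -> collector data flow.
# The iterator emits each arrayItem in order (not in parallel), along
# with arraySize; the collector reassembles the processed items.
def run_iterator(array):
    size = len(array)                 # arraySize output
    for item in array:                # arrayItem output, one at a time
        yield item, size

def process(item):
    # Hypothetical placeholder for the intermediate nodes.
    return item.upper()

collected = []
for array_item, array_size in run_iterator(["a", "b", "c"]):
    collected.append(process(array_item))

# The collector returns collectedArray once it has arraySize items.
assert len(collected) == array_size
print(collected)  # ['A', 'B', 'C']
```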

## Prompt node
<a name="flow-node-prompt"></a>

A prompt node defines a prompt to use in the flow. The inputs to the prompt node are values to fill in the variables that you define for the prompt. The output is the generated response from the model. For more information, see [Reuse and share Amazon Bedrock prompts](prompt-mgmt.md).
+ **Node name** – Any
+ **Prompt** – The [prompt](prompt-mgmt.md) that the prompt node uses.
+ **Version** – The version of the [prompt](prompt-mgmt.md) to use.

You can assign a guardrail to a prompt node. When you create the prompt node, you can choose to create a new guardrail or select an existing guardrail. For more information, see [Safeguard your Amazon Bedrock app with a guardrail](guardrails.md).

### Prompt node inputs
<a name="flow-node-prompt-inputs"></a>


| Name | Type | Expression | 
| --- | --- | --- | 
|  Any  |  String, Number, Boolean, Object and Array.  |  Yes  | 

### Prompt node outputs
<a name="flow-node-prompt-output"></a>


| Name | Type | 
| --- | --- | 
|  modelCompletion  |  String  | 

## Knowledge Base node
<a name="flow-node-kb"></a>

A knowledge base node lets you send a query to a knowledge base and get a response that the flow sends to the next node. For more information, see [Use a Local file as a data source](data-source-document.md).
+ **Node name** – Any
+ **Knowledge base** – The [Knowledge Base](data-source-document.md) that the node uses.
+ **Response type** – The model that the node uses to generate a response.

### Knowledge base node inputs
<a name="flow-node-kb-inputs"></a>


| Name | Type | Expression | 
| --- | --- | --- | 
|  retrievalQuery  |  String  |  Yes  | 

### Knowledge base node outputs
<a name="flow-node-kb-output"></a>


| Name | Type | 
| --- | --- | 
|  outputText  |  String  | 

You can assign a guardrail to a knowledge base node. When you create the knowledge base node, you can choose to create a new guardrail or select an existing guardrail. For more information, see [Safeguard your Amazon Bedrock app with a guardrail](guardrails.md).

## Agent node
<a name="flow-node-agent"></a>

An agent node lets you send a prompt to an agent, which orchestrates between foundation models and associated resources to identify and carry out actions for an end-user. The inputs into the node are the prompt for the agent and any associated prompt or session attributes. For more information, see [Control agent session context](https://docs.aws.amazon.com/bedrock/latest/userguide/agents-session-state.html) in the *Amazon Bedrock user guide*.

### Agent node inputs
<a name="flow-node-agent-inputs"></a>


| Name | Type | Expression | 
| --- | --- | --- | 
|  agentInputText  |  String, Number, Boolean, Object, Array  |  Yes  | 
|  promptAttributes  |  Object  |  Yes  | 
|  sessionAttributes  |  Object  |  Yes  | 

### Agent node outputs
<a name="flow-node-agent-outputs.title"></a>


| Name | Type | 
| --- | --- | 
|  agentResponse  |  String  | 

## S3 storage node
<a name="flow-node-s3-storage"></a>

An Amazon S3 storage node takes an object key and flow data as input. The flow data is stored in an Amazon S3 object specified by the object key, and the node returns the Amazon S3 URI where the data is stored. You can specify the Amazon S3 bucket in the configuration settings of the node.

### S3 storage node inputs
<a name="flow-node-s3-storage-inputs"></a>


| Name | Type | Expression | 
| --- | --- | --- | 
|  content  |  String, Number, Boolean, Object, Array  |  Yes  | 
|  objectKey  |  String  |  Yes  | 

### S3 storage node outputs
<a name="flow-node-s3-storage-outputs.title"></a>


| Name | Type | 
| --- | --- | 
|  s3Uri  |  String  | 

## S3 retrieval node
<a name="flow-node-s3-retrieval"></a>

An Amazon S3 retrieval node takes an object key as input and returns the data from the Amazon S3 object specified by the object key. You can specify the Amazon S3 bucket in the configuration settings of the node.

### S3 retrieval node inputs
<a name="flow-node-s3-retrieval-inputs"></a>


| Name | Type | Expression | 
| --- | --- | --- | 
|  objectKey  |  String  |  Yes  | 

### S3 retrieval node outputs
<a name="flow-node-s3-retrieval-outputs.title"></a>


| Name | Type | 
| --- | --- | 
|  s3Content  |  String  | 
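Taken together, the two node contracts can be sketched like this. The snippet uses an in-memory dict in place of a real bucket, and the bucket and key names are hypothetical; it only illustrates the input and output shapes described above, not the actual node implementation:

```python
# Sketch of the S3 storage and retrieval node contracts, using an
# in-memory dict in place of a real Amazon S3 bucket (illustration only).
bucket_name = "example-flow-bucket"   # hypothetical bucket name
bucket = {}                           # stands in for the S3 bucket

def s3_storage_node(object_key, content):
    """Stores content under object_key and returns the s3Uri output."""
    bucket[object_key] = content
    return f"s3://{bucket_name}/{object_key}"

def s3_retrieval_node(object_key):
    """Returns the s3Content output for object_key."""
    return bucket[object_key]

uri = s3_storage_node("results/run-1.txt", "flow output data")
print(uri)                                     # s3://example-flow-bucket/results/run-1.txt
print(s3_retrieval_node("results/run-1.txt"))  # flow output data
```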

## Adding an Amazon S3 bucket
<a name="adding-s3-bucket"></a>

To use S3 storage and retrieval flow nodes, you must first set up a connection with an Amazon S3 bucket. Amazon S3 buckets that are already within your project can be connected to automatically through a default S3 connection. If you want to use an S3 bucket from outside of your project, you will need to set up an S3 connection using the steps below.

**To connect to an S3 bucket outside of your project:**

1. Navigate to the data section under your project overview.

1. Select **Add**.

1. Select **Add S3 location**, then select **Next**.

1. Follow the steps for Prerequisite option 2 in [Adding Amazon S3 data](adding-existing-s3-data.md) to create the S3 connection and configure the correct permissions. Alternatively, you can use the following JSON policy to add permissions for an S3 storage node:

   ```
   {
       "Sid": "WriteToS3Bucket",
       "Effect": "Allow",
       "Action": [
           "s3:PutObject"
       ],
       "Resource": [
           "arn:aws:s3:::${bucket-name}",
           "arn:aws:s3:::${bucket-name}/*"
       ],
       "Condition": {
           "StringEquals": {
               "aws:ResourceAccount": "${account-id}"
           }
       }
   }
   ```

   And the following JSON policy to add permissions for an S3 retrieval node:

   ```
   {
       "Sid": "AccessS3Bucket",
       "Effect": "Allow",
       "Action": [
           "s3:GetObject"
       ],
       "Resource": [
           "arn:aws:s3:::${bucket-name}/*"
       ],
       "Condition": {
           "StringEquals": {
               "aws:ResourceAccount": "${account-id}"
           }
       }
   }
   ```

# Reuse and share Amazon Bedrock prompts
<a name="prompt-mgmt"></a>

You can create and manage reusable prompts for use in a flow app or you can share them with other users. With a flow app you can pre-configure a prompt for a flow, by choosing the model and inference parameters that the model uses. You can also customize the prompt for different use cases by using variables. For example, you could have a prompt that creates a playlist of songs about topics that a user chooses. You can also share prompts that you create with other users. 

**Topics**
+ [Create an Amazon Bedrock prompt](creating-a-prompt.md)
+ [Add an Amazon Bedrock prompt to a flow app](add-prompt-to-prompt-flow-app.md)
+ [Modify an Amazon Bedrock prompt](editing-a-prompt.md)
+ [Delete an Amazon Bedrock prompt](deleting-a-prompt.md)
+ [Share an Amazon Bedrock prompt version](sharing-a-prompt.md)

# Create an Amazon Bedrock prompt
<a name="creating-a-prompt"></a>

When you create a prompt, you select a model for it and can modify inference parameters. To adjust the prompt for different use cases, you can include up to 5 variables. 

You define variables in a prompt by surrounding them in double curly braces `{{variable}}`. For example, the following prompt defines two variables, `topic` and `location`. 

*Generate a playlist of songs about `{{topic}}`. Make sure each song is by artists from `{{location}}`.*

When you run the prompt, you supply values for the variables. Amazon Bedrock in SageMaker Unified Studio fills the prompt with the variable values and then passes the prompt to the model. For example, if you supply a `topic` value of *castle* and a `location` value of *Wales*, the model generates a playlist of songs about castles by Welsh artists.
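The substitution step can be sketched in Python. This illustrates only the `{{variable}}` convention, not the actual implementation:

```python
import re

# Sketch of how {{variable}} placeholders are filled with supplied
# values before the prompt is sent to the model (illustration only).
template = ("Generate a playlist of songs about {{topic}}. "
            "Make sure each song is by artists from {{location}}.")
values = {"topic": "castles", "location": "Wales"}

filled = re.sub(r"\{\{(\w+)\}\}", lambda m: values[m.group(1)], template)
print(filled)
# Generate a playlist of songs about castles. Make sure each song is
# by artists from Wales.
```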

You initially create a draft of your prompt. You can then test your prompt by entering test values for the variables and running the prompt. These values are only for temporary testing and aren't saved to your prompt.

You can create variants of your prompt that use different messages, models, or configurations so that you can compare their outputs to decide the best variant for your use case.

 When you are ready, you can create a version of your prompt for use in a flow app. You can create multiple versions of a prompt, but you can only [edit](editing-a-prompt.md) the latest version. When you [delete](deleting-a-prompt.md) a prompt, it deletes all versions of the prompt.

**Warning**  
Generative AI may give inaccurate responses. Avoid sharing sensitive information. Chats may be visible to others in your organization.

**To create a prompt**

1. Navigate to the Amazon SageMaker Unified Studio landing page by using the URL from your administrator.

1. Access Amazon SageMaker Unified Studio using your IAM or single sign-on (SSO) credentials. For more information, see [Access Amazon SageMaker Unified Studio](getting-started-access-the-portal.md).

1. On the Amazon SageMaker Unified Studio home page, navigate to the **Generative AI app development** tile.

   For the **Build chat agent app** button dropdown, select **Build prompt**. You can also create a prompt from the **Build** menu at the top of the page.  
![\[Amazon Bedrock in SageMaker Unified Studio tile.\]](http://docs.aws.amazon.com/sagemaker-unified-studio/latest/userguide/images/bedrock/bedrock-ide-build-prompt.png)

1. In the **Select or create a new project to continue** dialog box, do one of the following:
   + If you want to use a new project, follow the instructions at [Create a new project](create-new-project.md). For the **Project profile** in step 1, choose **Generative AI application development**.
   + If you want to use an existing project, select the project that you want to use and then choose **Continue**. 

1. Choose the prompt name (**Untitled Prompt-nnnn**) and enter a name for the prompt. 

1. In the **Configs** section, do the following:

   1. For **Model**, select the model that you want to use.

   1. (Optional) In **Parameters**, set the inference parameters values that you want to use. If you don't make changes, the prompt uses the default values for the model. For more information, see [Inference parameters](explore-prompts.md#inference-parameters).

   1. (Optional) In **System instructions**, enter any overarching system prompts that you want the model to apply for future interactions. For more information, see [System instructions](explore-prompts.md#system-prompts).

1. In the center pane, enter **Generate a playlist of songs about {{topic}}. Make sure each song is by artists from {{location}}.** in the **Prompt message** text box. 

1. Choose **Save** to save a draft of your prompt.

1. Test your prompt by doing the following:

   1. On the right side of the page, choose **<** to open the test pane.

   1. For **Test variable values**, enter the following values for your prompt variables. 
      + **topic** – Enter **castles**.
      + **location** – Enter **Wales**.

   1. Choose **Run** to test your prompt. You should see your prompt, with populated variables, in the **Test** section. Amazon Bedrock in SageMaker Unified Studio displays the response from the model underneath your prompt.

   1. (Optional) Choose **Reset** to clear previously shown test results.

1. (Optional) Compare the prompt with up to 2 variants by doing the following:

   1. Choose **Compare variants**

   1. In **Variant 1**, enter the model and prompt message that you want to use. Also add the test variable values.

   1. Choose **Run all** to run and compare the results.

   1. (Optional) Choose **Add variant 2** to add another prompt variant to compare. 

   1. Decide which prompt you want to save and choose **Save**. 

   1. Choose **Exit comparison** to finish comparing the prompts.

   1. In the **Exit comparison** dialog box decide whether you want to continue with the original prompt or continue with a variant of the prompt. Choose **Exit**.

1. Continue to make changes to the prompt and variables until you are satisfied with the results. You can choose **Reset** to clear previously shown test results.

1. When you are ready, choose **Create version** to create a version of your prompt. If the button is disabled, wait until Amazon Bedrock in SageMaker Unified Studio finishes saving the prompt, which can take up to a minute.

1. Add your prompt to a [flow app](add-prompt-to-prompt-flow-app.md).

# Add an Amazon Bedrock prompt to a flow app
<a name="add-prompt-to-prompt-flow-app"></a>

In this procedure, you add a prompt to an existing [flow app](create-flows-app.md).

**To add a prompt to a flow app**

1. Navigate to the Amazon SageMaker Unified Studio landing page by using the URL from your administrator.

1. Access Amazon SageMaker Unified Studio using your IAM or single sign-on (SSO) credentials. For more information, see [Access Amazon SageMaker Unified Studio](getting-started-access-the-portal.md).

1. If the project that you want to use isn't already open, do the following:

   1. Choose the current project at the top of the page. If a project isn't already open, choose **Select a project**.

   1. Select **Browse all projects**. 

   1. In **Projects** select the project that you want to use.

1. Choose the **Build** menu option at the top of the page.

1. In **MACHINE LEARNING & GENERATIVE AI** choose **My apps**.

1. In **Apps** choose the flow app that you want to add the prompt to.

1. In the **flow builder** pane, select the **Nodes** tab.

1. From the **Orchestration** section, drag a **Prompt** node onto the flow builder canvas.

1. In the flow builder, select the Prompt node that you just added. 

1. In the **flow builder** pane, choose the **Configure** tab and do the following:

   1. For **Node name**, enter a name for the Prompt node. 

   1. For **Prompt** in the **Prompt details** section, select the prompt that you want to add.

   1. For **Version**, select the version of the prompt that you want to add.

   1. (Optional) In **Select guardrail** select an existing guardrail. For more information, see [Safeguard your Amazon Bedrock app with a guardrail](guardrails.md).

   1. If you want to identify specific data from the upstream node that the prompt should use, change the value in **Expression**. For more information, see [Define inputs with expressions](flows-expressions.md). 

1. The circles on the nodes are connection points. For each variable, draw a line from the circle on the upstream node (such as the **Flow input** node) to the circle for the variable in the **Input** section of the prompt node. 

1. Connect the **Output** of the prompt node to the downstream node that you want the prompt to send its output to. The flow should look similar to the following image:  
![\[Connect an Amazon Bedrock in SageMaker Unified Studio prompt node to a downstream node.\]](http://docs.aws.amazon.com/sagemaker-unified-studio/latest/userguide/images/bedrock/add-prompt-flow-app.png)

1. Choose **Save** to save your changes.

# Modify an Amazon Bedrock prompt
<a name="editing-a-prompt"></a>

You can modify the current draft of a prompt or modify previous versions of a prompt. To modify a prompt, you select the version of the prompt (or current working draft prompt) that you want to modify. You then work on a draft update of the prompt. You can change the configuration for different versions of a prompt. For example, different versions of a prompt can use different Amazon Bedrock in SageMaker Unified Studio models or use different inference parameters.

After testing the draft prompt, you can then save the draft as a new version of the prompt. If you want to use a new version of a prompt in a flow app, update the version of the prompt in the app configuration. For more information, see [Step 3: Add a prompt to your flow app](build-flow.md#build-flow-prompt).

Creating a new version of a prompt that you have already shared doesn't automatically share the new version with the users that have access to the existing prompt version.

 For more information about the changes you can make, see [Create an Amazon Bedrock prompt](creating-a-prompt.md).

**To modify a prompt**

1. Navigate to the Amazon SageMaker Unified Studio landing page by using the URL from your administrator.

1. Access Amazon SageMaker Unified Studio using your IAM or single sign-on (SSO) credentials. For more information, see [Access Amazon SageMaker Unified Studio](getting-started-access-the-portal.md).

1. Choose the **Build** menu at the top of the page.

1. In the **MACHINE LEARNING & GENERATIVE AI** section, choose **My apps**.

1. In the **Select or create a new project to continue** dialog box, select the project that contains the prompt.

1. In the left pane, choose **Asset gallery** and then **My prompts**.

1. In **Prompts**, select the prompt that you want to modify.

1. In **Configs**, make changes to the model and inference parameters. 

1. For **Prompt message**, use the text box to make changes to the prompt message.

1. (Optional) Choose **Save** to save the draft of your prompt.

1. In **Test**, enter values for the prompt variables and choose **Run** to test your changes. 

1. When you are satisfied with your changes, choose **Create version** to create a new version of your prompt.

# Delete an Amazon Bedrock prompt
<a name="deleting-a-prompt"></a>

You can delete prompts that you have previously created. When you delete a prompt, Amazon Bedrock in SageMaker Unified Studio checks if deleting the prompt affects any apps that use the prompt. After you confirm deletion, Amazon Bedrock in SageMaker Unified Studio deletes the prompt draft and all versions of the prompt that you have created. 

**To delete a prompt**

1. Navigate to the Amazon SageMaker Unified Studio landing page by using the URL from your administrator.

1. Access Amazon SageMaker Unified Studio using your IAM or single sign-on (SSO) credentials. For more information, see [Access Amazon SageMaker Unified Studio](getting-started-access-the-portal.md).

1. Choose the **Build** menu at the top of the page.

1. In the **MACHINE LEARNING & GENERATIVE AI** section, choose **My apps**.

1. In the **Select or create a new project to continue** dialog box, select the project that contains the prompt.

1. In the left pane, choose **Asset gallery** and then **My prompts**.

1. In **Prompts**, choose the delete button for the prompt that you want to delete.

1. In the **Delete** dialog box, check if deleting the prompt affects any of your apps. You can still delete the prompt, but you will need to make changes to the apps that use the prompt.

1. If you are ready to delete the prompt variant, enter **delete** in the text box and then choose **Delete**.

# Share an Amazon Bedrock prompt version
<a name="sharing-a-prompt"></a>

You can share versions of prompts that you have previously created. You can share a prompt version with all members of your Amazon SageMaker Unified Studio domain, or with specific users or groups in your Amazon SageMaker Unified Studio domain.

When you first share a prompt version, you get a share link to the prompt version that you can send to users. If you share the prompt version with all users, Amazon SageMaker Unified Studio grants permission to a user when they first open the share link. Amazon SageMaker Unified Studio also adds the prompt version to the user's shared assets list. If you share the prompt version with specific users and groups, the prompt version is immediately available in their shared assets list. They can also use the share link to access the prompt. By default, sharing a prompt version is restricted to only those users or groups that you select. 

If you need the share link again after sharing the prompt version, choose to share the prompt version again and copy the share link. You can also change the users that you share the prompt version with.

To see which prompt versions you have shared, open the project, choose **Asset gallery**, and then choose **My prompts**. Check the **Share status** column for the prompt.

**To share a prompt version**

1. Navigate to the Amazon SageMaker Unified Studio landing page by using the URL from your administrator.

1. Access Amazon SageMaker Unified Studio using your IAM or single sign-on (SSO) credentials. For more information, see [Access Amazon SageMaker Unified Studio](getting-started-access-the-portal.md).

1. Choose the **Build** menu at the top of the page.

1. In the **MACHINE LEARNING & GENERATIVE AI** section, choose **My apps**.

1. In the **Select or create a new project to continue** dialog box, select the project that contains the prompt.

1. In the left pane, choose **Asset gallery** and then **My prompts**.

1. In **Prompts**, select the prompt that you want to share.

1. If you haven't previously created a version of your prompt, choose **Create version** to create a version of your prompt.

1. Choose the menu option, and then choose **Share prompt version** to open the prompt sharing pane.

1. In **Version to publish**, select the version of the prompt that you want to share.

1. Do one of the following:
   + If you want to share the prompt version with all members of your Amazon SageMaker Unified Studio domain, turn on **Grant access with link**.
   + If you want to share the prompt version with specific Amazon SageMaker Unified Studio domain users or groups, do the following in **Share with specific users or groups**:

     1. For **Member type**, choose **Individual user** or **Group**, depending on the type of member that you want to share the app with.

     1. Search for the users or groups that you want to share the app with by entering the user name or group in the **Search by alias to invite members** text box.

     1. In the drop-down list, select the matching user name or group that you want to share the app with. 

     1. Choose **Add** to add the user or group.

1. Choose **Share** to share the prompt. 

1. When the success message appears, choose **Copy link** and send the link to the users that you are sharing the prompt version with. If **Grant access with link** is off, the link only works for users that you have explicitly granted access to the prompt. 

# Evaluate the performance of an Amazon Bedrock model
<a name="evaluation"></a>

With Amazon Bedrock in SageMaker Unified Studio, you can use automatic model evaluations to quickly evaluate the performance and effectiveness of Amazon Bedrock foundation models. To evaluate a model you create an evaluation job. Model evaluation jobs support common use cases for large language models (LLMs) such as text generation, text classification, question answering, and text summarization. The results of a model evaluation job allow you to compare model outputs, and then choose the model best suited for your needs. You can view performance metrics, such as the semantic robustness of a model. Automatic evaluations produce calculated scores and metrics that help you assess the effectiveness of a model. 

Amazon Bedrock in SageMaker Unified Studio doesn't support human-based evaluations. For more information, see [Model evaluation jobs](https://docs.aws.amazon.com/bedrock/latest/userguide/model-evaluation.html) in the *Amazon Bedrock user guide*.

**Important**  
In Amazon Bedrock in SageMaker Unified Studio, you can view the model evaluation jobs in your project. However, the Amazon Bedrock API allows users to list all model evaluation jobs in the AWS account that hosts the project. We don't recommend including sensitive information in model evaluation job metadata.   
If you delete an Amazon SageMaker Unified Studio project, or if your admin deletes your domain, your model evaluation jobs are not automatically deleted. If you don't delete your jobs before the project or domain is deleted, you will need to use the Amazon Bedrock console to delete the jobs. Contact your administrator if you don't have access to the Amazon Bedrock console. 

This section shows you how to create and manage model evaluation jobs, and the kinds of performance metrics you can use. This section also describes the available built-in datasets and how to specify your own dataset.

**Topics**
+ [Create a model evaluation job with Amazon Bedrock](model-evaluation-jobs-management-create.md)
+ [Model evaluation task types in Amazon Bedrock](model-evaluation-tasks.md)
+ [Use prompt datasets for model evaluation in Amazon Bedrock](model-evaluation-prompt-datasets.md)
+ [Review a model evaluation job in Amazon Bedrock](model-evaluation-report.md)

# Create a model evaluation job with Amazon Bedrock
<a name="model-evaluation-jobs-management-create"></a>

When you create a model evaluation job, you specify the model, task type, and prompt dataset that you want the job to use. You also specify the metrics that you want the job to create.

To create a model evaluation job, you must have access to an Amazon Bedrock model that supports model evaluation. For more information, see [Model support by feature](https://docs.aws.amazon.com/bedrock/latest/userguide/models-features.html) in the *Amazon Bedrock user guide*. If you don't have access to a suitable model, contact your administrator. 

 Model evaluation supports the following task types that assess different aspects of the model's performance:
+ **[General text generation](model-evaluation-tasks-general-text.md)** – the model performs natural language processing and text generation tasks.
+ **[Text summarization](model-evaluation-tasks-text-summary.md)** – the model summarizes text based on the prompts you provide.
+ **[Question and answer](model-evaluation-tasks-question-answer.md)** – the model provides answers based on your prompts.
+ **[Text classification](model-evaluation-text-classification.md)** – the model categorizes text into predefined classes based on the input dataset.

To perform a model evaluation for a task type, Amazon Bedrock in SageMaker Unified Studio needs an input dataset that contains prompts. The job uses the dataset for inference during evaluation. You can use a [built-in](model-evaluation-prompt-datasets-builtin.md) dataset that Amazon Bedrock in SageMaker Unified Studio supplies, or you can supply your own [custom](model-evaluation-prompt-datasets-custom.md) prompt dataset. For information about creating a custom prompt dataset, see [custom prompt](model-evaluation-prompt-datasets-custom.md). When you supply your own dataset, Amazon Bedrock in SageMaker Unified Studio uploads the dataset to an Amazon S3 bucket that it manages. You can get the location from the Amazon S3 section of your project's **Data Store**. You can also use a custom dataset that you have previously uploaded to the Data Store. 
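As an illustration, a custom prompt dataset is a JSON Lines file with one record per prompt. The following Python sketch writes a small example file; the field names (`prompt`, `referenceResponse`, `category`) follow the Amazon Bedrock custom dataset format as described in the custom prompt topic, but verify them against that topic for your task type, and the file name and record contents here are hypothetical:

```python
import json

# Sketch: build a small custom prompt dataset in JSON Lines format,
# one JSON object per line. Field names follow the Amazon Bedrock
# custom dataset format; verify against the custom prompt topic.
records = [
    {"prompt": "What is the capital of Wales?",
     "referenceResponse": "Cardiff",
     "category": "Geography"},
    {"prompt": "Summarize: Castles were built across Wales in the Middle Ages.",
     "referenceResponse": "Many castles were built in medieval Wales.",
     "category": "Summarization"},
]

with open("custom-prompts.jsonl", "w") as f:
    for record in records:
        f.write(json.dumps(record) + "\n")
```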

You can choose which of the following metrics you want the model evaluation job to create. 
+ **Toxicity** – The presence of harmful, abusive, or undesirable content generated by the model. 
+ **Accuracy** – The model's ability to generate outputs that are factually correct, coherent, and aligned with the intended task or query. 
+ **Robustness** – The model's ability to maintain consistent and reliable performance in the face of various types of challenges or perturbations.

How the model evaluation job applies the metrics depends on the task type that you choose. For more information, see [Review a model evaluation job in Amazon Bedrock](model-evaluation-report.md).

You can tag model evaluation jobs for purposes such as tracking costs. Amazon Bedrock in SageMaker Unified Studio automatically prepends tags you add with *ProjectUserTag*. To view the tags that you add, you need to use the tag editor in the AWS Resource Groups console. For more information, see [What is Tag Editor?](https://docs.aws.amazon.com/tag-editor/latest/userguide/gettingstarted.html) in the *AWS Resource Management Documentation*.

You can set the inference parameters for the model evaluation job, such as *Max tokens*, *Temperature*, and *Top P*. Models might support other parameters that you can change. For more information, see [Inference request parameters and response fields for foundation models](https://docs.aws.amazon.com/bedrock/latest/userguide/model-parameters.html) in the *Amazon Bedrock User Guide*.
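
For illustration, an inference configuration for a model that supports these parameters might look like the following sketch. The camel-case names here follow the `inferenceConfig` naming used by the Amazon Bedrock Converse API and are shown as an example, not as the exact fields that the evaluation UI uses:

```json
{
    "maxTokens": 512,
    "temperature": 0.7,
    "topP": 0.9
}
```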

**To create an automatic model evaluation job**

1. Navigate to the Amazon SageMaker Unified Studio landing page by using the URL from your administrator.

1. Access Amazon SageMaker Unified Studio using your IAM or single sign-on (SSO) credentials. For more information, see [Access Amazon SageMaker Unified Studio](getting-started-access-the-portal.md).

1. If you want to use a new project, do the following:

   1. Choose the current project at the top of the page. If a project isn't already open, choose **Select a project**.

   1. Select **Create project**. 

   1. Follow the instructions at [Create a new project](create-new-project.md). For the **Project profile** in step 1, choose **Generative AI application development**.

1. If the project that you want to use isn't already open, do the following:

   1. Choose the current project at the top of the page. If a project isn't already open, choose **Select a project**.

   1. Select **Browse all projects**. 

   1. In **Projects** select the project that you want to use.

1. At the top of the page, select **Build**. 

1. In the **MACHINE LEARNING & GENERATIVE AI** section, under **AI OPS**, choose **Model evaluations**. 

1. Choose **Create evaluation** to open the **Create evaluation** page and start step 1 (specify details).

1. For **Evaluation job name**, enter a name for the evaluation job. This name is shown in your model evaluation job list. 

1. (Optional) For **Description**, enter a description.

1. (Optional) For **Tags**, add the tags that you want to attach to the model evaluation job. 

1. Choose **Next** to start step 2 (set up evaluation).

1. In **Model selector**, select a model by selecting the **Model provider** and then the **Model**. 

1. (Optional) To change the inference configuration, choose **update** to open the **Inference configurations** pane.

1. In **Task type**, choose the type of task you want the model evaluation job to perform. For information about the available task types, see [Model evaluation task types in Amazon Bedrock](model-evaluation-tasks.md).

1. For the task type, choose the metrics that you want the evaluation job to collect. For information about available metrics, see [Review a model evaluation job in Amazon Bedrock](model-evaluation-report.md). 

1. For each metric, select the dataset that you want to use in **Choose an evaluation dataset**.
   + To use a [built in](model-evaluation-prompt-datasets-builtin.md) dataset, choose **Built in datasets** and choose the metrics that you want to use.
   + To upload a [custom dataset](model-evaluation-prompt-datasets-custom.md), choose **Upload a dataset to S3** and upload the dataset file. 
   + To use an existing custom dataset, choose **Choose a dataset from S3** and select the previously uploaded custom dataset. 

1. Choose **Next** to start step 3 (review and submit).

1. Check that the evaluation job details are correct.

1. Choose **Submit** to start the model evaluation job.

1. Wait until the model evaluation job finishes. The job is complete when its status is **Success** on the model evaluations page.

1. Next step: [Review](model-evaluation-report.md) the results of the model evaluation job.

If you decide to stop the model evaluation job, open the model evaluations page, choose the model evaluation job, and choose **Stop**. To delete the model evaluation job, choose the job and then choose **Delete**.

# Model evaluation task types in Amazon Bedrock
<a name="model-evaluation-tasks"></a>

In a model evaluation job, an evaluation task type is the task that you want the model to perform, based on the information in your prompts. You can choose one task type per model evaluation job.

The following table summarizes the available task types for automatic model evaluations, the built-in datasets, and the relevant metrics for each task type.


**Available built-in datasets for automatic model evaluation jobs in Amazon Bedrock**  
[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/sagemaker-unified-studio/latest/userguide/model-evaluation-tasks.html)

**Topics**
+ [General text generation for model evaluation in Amazon Bedrock](model-evaluation-tasks-general-text.md)
+ [Text summarization for model evaluation in Amazon Bedrock](model-evaluation-tasks-text-summary.md)
+ [Question and answer for model evaluation in Amazon Bedrock](model-evaluation-tasks-question-answer.md)
+ [Text classification for model evaluation in Amazon Bedrock](model-evaluation-text-classification.md)

# General text generation for model evaluation in Amazon Bedrock
<a name="model-evaluation-tasks-general-text"></a>

General text generation is a task used by applications that include chatbots. The responses generated by a model to general questions are influenced by the correctness, relevance, and bias contained in the text used to train the model.

**Important**  
For general text generation, there is a known system issue that prevents Cohere models from completing the toxicity evaluation successfully.

The following built-in datasets contain prompts that are well-suited for use in general text generation tasks.

**Bias in Open-ended Language Generation Dataset (BOLD)**  
The Bias in Open-ended Language Generation Dataset (BOLD) is a dataset that evaluates fairness in general text generation, focusing on five domains: profession, gender, race, religious ideologies, and political ideologies. It contains 23,679 different text generation prompts.

**RealToxicityPrompts**  
RealToxicityPrompts is a dataset that evaluates toxicity. It attempts to get the model to generate racist, sexist, or otherwise toxic language. This dataset contains 100,000 different text generation prompts.

**T-Rex : A Large Scale Alignment of Natural Language with Knowledge Base Triples (TREX)**  
TREX is a dataset consisting of Knowledge Base Triples (KBTs) extracted from Wikipedia. KBTs are a type of data structure used in natural language processing (NLP) and knowledge representation. They consist of a subject, predicate, and object, where the subject and object are linked by a relation. An example of a Knowledge Base Triple (KBT) is "George Washington was the president of the United States". The subject is "George Washington", the predicate is "was the president of", and the object is "the United States".

**WikiText2**  
WikiText2 is a HuggingFace dataset that contains prompts used in general text generation.

The following table summarizes the metrics calculated and the recommended built-in datasets that are available for automatic model evaluation jobs. 


**Available built-in datasets for general text generation in Amazon Bedrock**  
[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/sagemaker-unified-studio/latest/userguide/model-evaluation-tasks-general-text.html)

To learn more about how the computed metric for each built-in dataset is calculated, see [Review a model evaluation job in Amazon Bedrock](model-evaluation-report.md).

# Text summarization for model evaluation in Amazon Bedrock
<a name="model-evaluation-tasks-text-summary"></a>

Text summarization is used for tasks including creating summaries of news, legal documents, academic papers, content previews, and content curation. The ambiguity, coherence, bias, and fluency of the text used to train the model as well as information loss, accuracy, relevance, or context mismatch can influence the quality of responses.

**Important**  
For text summarization, there is a known system issue that prevents Cohere models from completing the toxicity evaluation successfully.

The following built-in dataset is supported for use with the text summarization task type.

**Gigaword**  
The Gigaword dataset consists of news article headlines. This dataset is used in text summarization tasks.

The following table summarizes the metrics calculated and the recommended built-in dataset. 


**Available built-in datasets for text summarization in Amazon Bedrock**  
[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/sagemaker-unified-studio/latest/userguide/model-evaluation-tasks-text-summary.html)

To learn more about how the computed metric for each built-in dataset is calculated, see [Review a model evaluation job in Amazon Bedrock](model-evaluation-report.md).

# Question and answer for model evaluation in Amazon Bedrock
<a name="model-evaluation-tasks-question-answer"></a>

Question and answer is used for tasks including generating automatic help-desk responses, information retrieval, and e-learning. If the text used to train the foundation model contains issues including incomplete or inaccurate data, sarcasm or irony, the quality of responses can deteriorate.

**Important**  
For question and answer, there is a known system issue that prevents Cohere models from completing the toxicity evaluation successfully.

The following built-in datasets are recommended for use with the question and answer task type.

**BoolQ**  
BoolQ is a dataset consisting of yes/no question and answer pairs. The prompt contains a short passage, and then a question about the passage. This dataset is recommended for use with question and answer task type.

**Natural Questions**  
Natural questions is a dataset consisting of real user questions submitted to Google search.

**TriviaQA**  
TriviaQA is a dataset that contains over 650K question-answer-evidence-triples. This dataset is used in question and answer tasks.

The following table summarizes the metrics calculated and the recommended built-in datasets. 


**Available built-in datasets for the question and answer task type in Amazon Bedrock**  
[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/sagemaker-unified-studio/latest/userguide/model-evaluation-tasks-question-answer.html)

To learn more about how the computed metric for each built-in dataset is calculated, see [Review a model evaluation job in Amazon Bedrock](model-evaluation-report.md).

# Text classification for model evaluation in Amazon Bedrock
<a name="model-evaluation-text-classification"></a>

Text classification is used to categorize text into pre-defined categories. Applications that use text classification include content recommendation, spam detection, language identification and trend analysis on social media. Imbalanced classes, ambiguous data, noisy data, and bias in labeling are some issues that can cause errors in text classification.

**Important**  
For text classification, there is a known system issue that prevents Cohere models from completing the toxicity evaluation successfully.

The following built-in datasets are recommended for use with the text classification task type.

**Women's E-Commerce Clothing Reviews**  
Women's E-Commerce Clothing Reviews is a dataset that contains clothing reviews written by customers. This dataset is used in text classification tasks. 

The following table summarizes the metrics calculated and the recommended built-in dataset. 




**Available built-in datasets in Amazon Bedrock**  
[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/sagemaker-unified-studio/latest/userguide/model-evaluation-text-classification.html)

To learn more about how the computed metric for each built-in dataset is calculated, see [Review a model evaluation job in Amazon Bedrock](model-evaluation-report.md).

# Use prompt datasets for model evaluation in Amazon Bedrock
<a name="model-evaluation-prompt-datasets"></a>

To create a model evaluation job, you must specify a prompt dataset that the model uses during inference. Amazon Bedrock in SageMaker Unified Studio provides built-in datasets that you can use in automatic model evaluations, or you can bring your own prompt dataset. 

Use the following sections to learn more about available built-in prompt datasets and creating your custom prompt datasets.

To learn more about creating your first model evaluation job in Amazon Bedrock, see [Create a model evaluation job with Amazon Bedrock](model-evaluation-jobs-management-create.md).

**Topics**
+ [Use built-in prompt datasets for automatic model evaluation in Amazon Bedrock](model-evaluation-prompt-datasets-builtin.md)
+ [Use custom prompt dataset for model evaluation in Amazon Bedrock in SageMaker Unified Studio](model-evaluation-prompt-datasets-custom.md)

# Use built-in prompt datasets for automatic model evaluation in Amazon Bedrock
<a name="model-evaluation-prompt-datasets-builtin"></a>

Amazon Bedrock provides multiple built-in prompt datasets that you can use in an automatic model evaluation job. Each built-in dataset is based on an open-source dataset. We have randomly downsampled each open-source dataset to include only 100 prompts.

When you create an automatic model evaluation job and choose a **Task type**, Amazon Bedrock provides you with a list of recommended metrics. For each metric, Amazon Bedrock also provides recommended built-in datasets. To learn more about available task types, see [Model evaluation task types in Amazon Bedrock](model-evaluation-tasks.md).

**Bias in Open-ended Language Generation Dataset (BOLD)**  
The Bias in Open-ended Language Generation Dataset (BOLD) is a dataset that evaluates fairness in general text generation, focusing on five domains: profession, gender, race, religious ideologies, and political ideologies. It contains 23,679 different text generation prompts.

**RealToxicityPrompts**  
RealToxicityPrompts is a dataset that evaluates toxicity. It attempts to get the model to generate racist, sexist, or otherwise toxic language. This dataset contains 100,000 different text generation prompts.

**T-Rex : A Large Scale Alignment of Natural Language with Knowledge Base Triples (TREX)**  
TREX is a dataset consisting of Knowledge Base Triples (KBTs) extracted from Wikipedia. KBTs are a type of data structure used in natural language processing (NLP) and knowledge representation. They consist of a subject, predicate, and object, where the subject and object are linked by a relation. An example of a Knowledge Base Triple (KBT) is "George Washington was the president of the United States". The subject is "George Washington", the predicate is "was the president of", and the object is "the United States".

**WikiText2**  
WikiText2 is a HuggingFace dataset that contains prompts used in general text generation.

**Gigaword**  
The Gigaword dataset consists of news article headlines. This dataset is used in text summarization tasks.

**BoolQ**  
BoolQ is a dataset consisting of yes/no question and answer pairs. The prompt contains a short passage, and then a question about the passage. This dataset is recommended for use with question and answer task type.

**Natural Questions**  
Natural Questions is a dataset consisting of real user questions submitted to Google search.

**TriviaQA**  
TriviaQA is a dataset that contains over 650K question-answer-evidence-triples. This dataset is used in question and answer tasks.

**Women's E-Commerce Clothing Reviews**  
Women's E-Commerce Clothing Reviews is a dataset that contains clothing reviews written by customers. This dataset is used in text classification tasks. 

In the following table, you can see the list of available datasets grouped by task type. To learn more about how automatic metrics are computed, see [Review a model evaluation job in Amazon Bedrock](model-evaluation-report.md). 


**Available built-in datasets for automatic model evaluation jobs in Amazon Bedrock**  
[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/sagemaker-unified-studio/latest/userguide/model-evaluation-prompt-datasets-builtin.html)

To learn more about the requirements for creating custom prompt datasets, and to see examples, see [Use custom prompt dataset for model evaluation in Amazon Bedrock in SageMaker Unified Studio](model-evaluation-prompt-datasets-custom.md).

# Use custom prompt dataset for model evaluation in Amazon Bedrock in SageMaker Unified Studio
<a name="model-evaluation-prompt-datasets-custom"></a>

You can use a custom prompt dataset for each metric that you select in a model evaluation job. Custom datasets use the JSON Lines format (`.jsonl`), and each line must be a valid JSON object. A dataset can contain up to 1,000 prompts per automatic evaluation job.

You must use the following keys in a custom dataset.
+ `prompt` – required to indicate the input for the following tasks:
  + The prompt that your model should respond to, in general text generation.
  + The question that your model should answer in the question and answer task type.
  + The text that your model should summarize in text summarization task.
  + The text that your model should classify in classification tasks.
+ `referenceResponse` – required to indicate the ground truth response against which your model is evaluated for the following task types:
  + The answer for all prompts in question and answer tasks.
  + The answer for all accuracy and robustness evaluations.
+ `category` – (optional) generates evaluation scores reported for each category. 

As an example, the accuracy metric requires both a question to ask and an answer to check the model response against. In this example, use the `prompt` key for the question and the `referenceResponse` key for the answer, as follows.

```
{
    "prompt": "Bobigny is the capital of",
    "referenceResponse": "Seine-Saint-Denis",
    "category": "Capitals"
}
```

The previous example is a single line of a JSON Lines input file that is sent to your model as an inference request. The model is invoked for every such record in your JSON Lines dataset. The following example input is for a question and answer task that uses the optional `category` key for evaluation.

```
{"prompt":"Aurillac is the capital of", "category":"Capitals", "referenceResponse":"Cantal"}
{"prompt":"Bamiyan city is the capital of", "category":"Capitals", "referenceResponse":"Bamiyan Province"}
{"prompt":"Sokhumi is the capital of", "category":"Capitals", "referenceResponse":"Abkhazia"}
```
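
Before uploading, you can check a custom dataset locally. The following sketch (illustrative only; `validate_dataset` is a hypothetical helper, not part of the service) uses only the Python standard library to verify that every line is valid JSON and contains the required keys:

```python
import json

REQUIRED_KEYS = {"prompt", "referenceResponse"}  # "category" is optional
MAX_PROMPTS = 1000  # limit per automatic evaluation job

def validate_dataset(lines):
    """Return a list of (line_number, error) tuples; an empty list means valid."""
    errors = []
    if len(lines) > MAX_PROMPTS:
        errors.append((0, f"dataset has {len(lines)} prompts; limit is {MAX_PROMPTS}"))
    for i, line in enumerate(lines, start=1):
        try:
            record = json.loads(line)
        except json.JSONDecodeError as e:
            errors.append((i, f"invalid JSON: {e}"))
            continue
        missing = REQUIRED_KEYS - record.keys()
        if missing:
            errors.append((i, f"missing keys: {sorted(missing)}"))
    return errors

sample = [
    '{"prompt": "Aurillac is the capital of", "referenceResponse": "Cantal", "category": "Capitals"}',
    '{"prompt": "Sokhumi is the capital of"}',  # missing referenceResponse
]
print(validate_dataset(sample))
```

Running the check before upload catches malformed lines early, rather than after the evaluation job has started.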

# Review a model evaluation job in Amazon Bedrock
<a name="model-evaluation-report"></a>

The results of a model evaluation job are presented in a report that includes key metrics to help you assess the model's performance and effectiveness. In your model evaluation report, you can see an evaluation summary and sections for each of the metrics that you chose for the evaluation job. 

**Topics**
+ [Viewing a model evaluation report](#model-evaluation-report-procedure)
+ [Understanding a model evaluation report](#model-evaluation-report-understanding)

## Viewing a model evaluation report
<a name="model-evaluation-report-procedure"></a>

**To view a model evaluation report**

1. Navigate to the Amazon SageMaker Unified Studio landing page by using the URL from your administrator.

1. Access Amazon SageMaker Unified Studio using your IAM or single sign-on (SSO) credentials. For more information, see [Access Amazon SageMaker Unified Studio](getting-started-access-the-portal.md).

1. If the project that you want to use isn't already open, do the following:

   1. Choose the current project at the top of the page. If a project isn't already open, choose **Select a project**.

   1. Select **Browse all projects**. 

   1. In **Projects** select the project that you want to use.

1. Choose the **Build** menu option at the top of the page.

1. In **MACHINE LEARNING & GENERATIVE AI** choose **My apps**.

1. From the navigation pane, choose **Build** and then **Model evaluations**. 

1. In the **Model evaluation jobs** table choose the name of the model evaluation job you want to review. The model evaluation card opens.

## Understanding a model evaluation report
<a name="model-evaluation-report-understanding"></a>

In the **Evaluation summary** you can see the task type and the task metrics that the evaluation job calculated.

For each metric, the report contains the dataset, the calculated metric value for the dataset, the total number of prompts in the dataset, and how many of those prompts received responses. How the metric value is calculated depends on the task type and the metrics that you selected.

For all semantic robustness related metrics, Amazon Bedrock in SageMaker Unified Studio perturbs prompts in the following ways: converting text to all lowercase, introducing keyboard typos, converting numbers to words, making random changes to uppercase, and randomly adding or deleting whitespace.
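
As an illustration of these perturbation types, the following sketch applies similar (simplified) transformations to a prompt. It approximates, and is not, the service's actual implementation:

```python
import random

def perturb(prompt, seed=0):
    """Apply simple semantic-preserving perturbations to a prompt.

    Illustrative approximation only: lowercasing, random changes to
    uppercase, and random whitespace insertion.
    """
    rng = random.Random(seed)  # fixed seed for reproducibility
    chars = []
    for ch in prompt.lower():
        if ch.isalpha() and rng.random() < 0.1:
            ch = ch.upper()          # random change to uppercase
        chars.append(ch)
        if ch != " " and rng.random() < 0.05:
            chars.append(" ")        # random whitespace insertion
    return "".join(chars)

print(perturb("The quick brown fox jumps over the lazy dog"))
```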

**How each available metric is calculated when applied to the general text generation task type**
+ **Accuracy**: For this metric, the value is calculated using the real world knowledge (RWK) score. The RWK score examines the model's ability to encode factual knowledge about the real world. A high RWK score indicates that your model is accurate.
+ **Robustness**: For this metric, the value is calculated using semantic robustness, which is calculated using word error rate. Semantic robustness measures how much the model output changes as a result of minor, semantic-preserving perturbations in the input. Robustness to such perturbations is a desirable property, so a low semantic robustness score indicates that your model is performing well.

  The perturbation types considered are: converting text to all lowercase, introducing keyboard typos, converting numbers to words, making random changes to uppercase, and randomly adding or deleting whitespace. Each prompt in your dataset is perturbed approximately five times. Each perturbed prompt is then sent for inference, and the responses are used to calculate robustness scores automatically.
+ **Toxicity**: For this metric, the value is calculated using toxicity from the detoxify algorithm. A low toxicity value indicates that your selected model is not producing large amounts of toxic content. To learn more about the detoxify algorithm and see how toxicity is calculated, see the [detoxify algorithm](https://github.com/unitaryai/detoxify) on GitHub.
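
Word error rate, which underlies the robustness score above, is a standard measure: the word-level edit distance between a reference output and a hypothesis output, divided by the length of the reference. A minimal sketch (shown for illustration, not the service's code):

```python
def word_error_rate(reference, hypothesis):
    """Word-level Levenshtein distance divided by reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edits to turn the first i reference words
    # into the first j hypothesis words
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,         # deletion
                           dp[i][j - 1] + 1,         # insertion
                           dp[i - 1][j - 1] + cost)  # substitution
    return dp[len(ref)][len(hyp)] / len(ref)

# one substitution ("the" -> "a") out of six reference words
print(word_error_rate("the cat sat on the mat", "the cat sat on a mat"))
```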

**How each available metric is calculated when applied to the text summarization task type**
+ **Accuracy**: For this metric, the value is calculated using BERT Score. BERT Score is calculated using pre-trained contextual embeddings from BERT models. It matches words in candidate and reference sentences by cosine similarity.
+ **Robustness**: For this metric, the value calculated is a percentage, calculated as (delta BERT Score / BERT Score) x 100. Delta BERT Score is the difference in BERT Scores between a perturbed prompt and the original prompt in your dataset. Each prompt in your dataset is perturbed approximately five times. Each perturbed prompt is then sent for inference, and the responses are used to calculate robustness scores automatically. A lower score indicates that the selected model is more robust.
+ **Toxicity**: For this metric, the value is calculated using toxicity from the detoxify algorithm. A low toxicity value indicates that your selected model is not producing large amounts of toxic content. To learn more about the detoxify algorithm and see how toxicity is calculated, see the [detoxify algorithm](https://github.com/unitaryai/detoxify) on GitHub.

**How each available metric is calculated when applied to the question and answer task type**
+ **Accuracy**: For this metric, the value calculated is the F1 score. The F1 score is the harmonic mean of precision (the ratio of correct predictions to all predictions) and recall (the ratio of correct predictions to the total number of relevant items). The F1 score ranges from 0 to 1, with higher values indicating better performance.
+ **Robustness**: For this metric, the value calculated is a percentage, calculated as (delta F1 / F1) x 100. Delta F1 is the difference in F1 scores between a perturbed prompt and the original prompt in your dataset. Each prompt in your dataset is perturbed approximately five times. Each perturbed prompt is then sent for inference, and the responses are used to calculate robustness scores automatically. A lower score indicates that the selected model is more robust.
+ **Toxicity**: For this metric, the value is calculated using toxicity from the detoxify algorithm. A low toxicity value indicates that your selected model is not producing large amounts of toxic content. To learn more about the detoxify algorithm and see how toxicity is calculated, see the [detoxify algorithm](https://github.com/unitaryai/detoxify) on GitHub.
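
As a sketch of the accuracy metric above, token-level F1 for question answering can be computed as follows. This illustrates the standard definition, not the service's exact implementation:

```python
from collections import Counter

def f1_score(prediction, reference):
    """Harmonic mean of precision and recall over shared tokens."""
    pred_tokens = prediction.lower().split()
    ref_tokens = reference.lower().split()
    common = Counter(pred_tokens) & Counter(ref_tokens)  # multiset intersection
    num_same = sum(common.values())
    if num_same == 0:
        return 0.0
    precision = num_same / len(pred_tokens)
    recall = num_same / len(ref_tokens)
    return 2 * precision * recall / (precision + recall)

# precision 1/4, recall 1/1 -> F1 = 0.4
print(f1_score("the capital is Cantal", "Cantal"))
```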

**How each available metric is calculated when applied to the text classification task type**
+ **Accuracy**: For this metric, the value calculated is accuracy. Accuracy is a score that compares the predicted class to its ground truth label. A higher accuracy indicates that your model is correctly classifying text based on the ground truth label provided.
+ **Robustness**: For this metric, the value calculated is a percentage, calculated as (delta classification accuracy score / classification accuracy score) x 100. Delta classification accuracy score is the difference between the classification accuracy score of the perturbed prompt and the original input prompt. Each prompt in your dataset is perturbed approximately five times. Each perturbed prompt is then sent for inference, and the responses are used to calculate robustness scores automatically. A lower score indicates that the selected model is more robust.
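
The percentage calculation described for the robustness metric reduces to the same arithmetic for every task type, as this sketch shows (`robustness_percentage` is a hypothetical helper for illustration):

```python
def robustness_percentage(original_score, perturbed_score):
    """(delta score / original score) x 100; lower means more robust."""
    delta = abs(original_score - perturbed_score)
    return (delta / original_score) * 100

# e.g. classification accuracy drops from 0.90 to 0.81 under perturbation,
# which is about a 10 percent change
print(robustness_percentage(0.90, 0.81))
```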

In the **Job configuration summary**, you can see the model and the inference parameters that the job used.

# Add a Knowledge Base to your Amazon Bedrock app
<a name="data-sources"></a>

You can use Knowledge Base components to store data from an external data source for use in a [chat agent app](create-chat-app.md) or [flow app](create-flows-app.md). When you create a Knowledge Base, you specify an embeddings model, which converts the data into numerical vector representations, and a vector store, which stores and manages your embeddings. The vector store is indexed for efficient retrieval, enabling *retrieval augmented generation (RAG)*: foundation models generate more accurate responses by drawing on relevant context retrieved from the vector store.

The data source for a knowledge base can be one of the following:
+ A [document](data-source-document.md), such as a PDF file
+ A [web crawler](data-source-document-web-crawler.md) that gathers content from specific source URLs
+ A data source already in your project, such as an Amazon S3 bucket, or structured data in Amazon Redshift

You can then use the knowledge base in a [chat agent app](create-chat-app.md) and a [flow app](create-flows-app.md). 

You can only access Knowledge Bases that you create within Amazon Bedrock in SageMaker Unified Studio. You can't access Knowledge Bases that you create in the Amazon Bedrock console or AWS SDK.

For more information, see [Build and manage knowledge bases for retrieval and responses](https://docs.aws.amazon.com/bedrock/latest/userguide/knowledge-base-resource.html) in the *Amazon Bedrock User Guide*.

**Topics**
+ [Create an Amazon Bedrock Knowledge Base component](creating-a-knowledge-base-component.md)
+ [Add an Amazon Bedrock Knowledge Base component to a chat agent app](add-kb-component-chat-app.md)
+ [Add a Knowledge Base component to a flow app](add-kb-component-prompt-flow-app.md)
+ [Synchronize an Amazon Bedrock Knowledge Base](kb-sync.md)

# Create an Amazon Bedrock Knowledge Base component
<a name="creating-a-knowledge-base-component"></a>

You can create a knowledge base as a component in an Amazon Bedrock in SageMaker Unified Studio project. You then add the knowledge base to a chat agent app or flow app. Alternatively, you can create a knowledge base when you [design](create-chat-app-with-components.md#chat-app-add-data-source) the app. When you create a knowledge base, you choose a data source, such as a local file or web crawler.

In this section you learn about the various data sources that you can use and how to create a knowledge base component.

**Topics**
+ [Use a Local file as a data source](data-source-document.md)
+ [Use a web crawler as a data source](data-source-document-web-crawler.md)
+ [Use project data as a data source](data-source-project.md)
+ [Understanding security boundaries with structured data sources in an Amazon Bedrock knowledge base](kb-security-boundaries.md)
+ [Chunking and parsing with knowledge bases](kb-chunking-parsing.md)

# Use a Local file as a data source
<a name="data-source-document"></a>

You can add a local file (document) as a data source. A document contains information that you want the model to use when generating a response. By using a document as a data source for a knowledge base, your app users can chat with a document. For example, they can use a document to answer questions, perform an analysis, create a summary, itemize fields in a numbered list, or rewrite content. 

You can use a document as a data source in a chat agent app and a flow app.

The document file must be in PDF, MD, TXT, DOC, DOCX, HTML, CSV, XLS, or XLSX format. The maximum file size is 50 MB. You can upload up to 50 documents to a knowledge base. 

**To create a Knowledge Base with a local file**

1. Navigate to the Amazon SageMaker Unified Studio landing page by using the URL from your administrator.

1. Access Amazon SageMaker Unified Studio using your IAM or single sign-on (SSO) credentials. For more information, see [Access Amazon SageMaker Unified Studio](getting-started-access-the-portal.md).

1. Choose the **Build** menu at the top of the page.

1. In the **MACHINE LEARNING & GENERATIVE AI** section, choose **My apps**.

1. In the **Select or create a new project to continue** dialog box, select the project that you want to use.

1. In the left pane, choose **Asset gallery**.

1. Choose **My components**.

1. In the **Components** section, choose **Create component** and then **Knowledge Base**. The **Create Knowledge Base** pane is shown.

1. For **Name**, enter a name for the Knowledge Base.

1. For **Description**, enter a description for the Knowledge Base.

1. In **Select data source type**, select **Local file**.

1. Choose **Click to upload** and upload the document that you want the Knowledge Base to use. Alternatively, add your source documents by dragging and dropping the document from your computer.

1. For **Parsing**, choose either **default** parsing or **parsing with foundation model**.

1. If you choose **parsing with foundation model**, do the following: 

   1. For **Choose a foundation model for parsing**, select your preferred foundation model. You can only choose models that your administrator has enabled for parsing. If you don't see a suitable model, contact your administrator. 

   1. (Optional) Overwrite the **Instructions for the parser** to suit your specific needs.

    For more information, see [Chunking and parsing with knowledge bases](kb-chunking-parsing.md).

1. (Optional) For **Chunking strategy**, choose a chunking strategy for your knowledge base. For more information, see [Chunking and parsing with knowledge bases](kb-chunking-parsing.md).

1. (Optional) For **Embeddings model**, choose a model for converting your data into vector embeddings, or use the default model.

1. Choose **Create** to create the Knowledge Base.

1. Use the Knowledge Base in an app by doing one of the following:
   + If your app is a chat agent app, follow [Add an Amazon Bedrock Knowledge Base component to a chat agent app](add-kb-component-chat-app.md).
   + If your app is a flow app, follow [Add a Knowledge Base component to a flow app](add-kb-component-prompt-flow-app.md).

# Use a web crawler as a data source
<a name="data-source-document-web-crawler"></a>

The web crawler provided by Amazon Bedrock in SageMaker Unified Studio connects to and crawls the URLs that you select for use in your Amazon Bedrock knowledge base. You can crawl website pages in accordance with the scope or limits that you set for your selected URLs. 

The web crawler connects to and crawls HTML pages starting from the seed URL, traversing all child links under the same top primary domain and path. If any of the HTML pages reference supported documents, the web crawler fetches those documents regardless of whether they are within the same top primary domain. 

The web crawler lets you:
+ Select multiple URLs to crawl
+ Respect standard `robots.txt` directives like 'Allow' and 'Disallow'
+ Limit the scope of the URLs to crawl and optionally exclude URLs that match a filter pattern
+ Limit the rate of crawling URLs

There are limits on the number of web page content items, and the size in MB of each content item, that Amazon Bedrock in SageMaker Unified Studio can crawl. See [Quotas for knowledge bases](https://docs.aws.amazon.com/bedrock/latest/userguide/quotas.html). In the AWS account and AWS Region that hosts your Amazon SageMaker Unified Studio domain, you can have a maximum of 5 crawler jobs running at a time. 

**Topics**
+ [Web crawler behavior](#data-source-document-web-crawler-behavior)
+ [Create a knowledge base with a web crawler](#data-source-document-web-crawler-procedure)

## Web crawler behavior
<a name="data-source-document-web-crawler-behavior"></a>

You can modify the crawling behavior by changing the following configuration options:

### Source URLs
<a name="ds-source-urls"></a>

You specify the source URLs that you want the Knowledge Base to crawl. Before you add a source URL, check the following.
+ Check that you are authorized to crawl your source URLs.
+ Check that the robots.txt file corresponding to your source URLs doesn't block the URLs from being crawled. The web crawler adheres to the robots.txt standard, defaulting to `disallow` if robots.txt is not found for the website. The web crawler respects robots.txt in accordance with [RFC 9309](https://www.rfc-editor.org/rfc/rfc9309.html).
+ Check if your source URL pages are JavaScript dynamically generated, as crawling dynamically generated content is currently not supported. You can check this by entering this in your browser: *view-source:https://examplesite.com/site/*. If the `body` element contains only a `div` element and few or no `a href` elements, then the page is likely generated dynamically. You can disable JavaScript in your browser, reload the web page, and observe whether content is rendered properly and contains links to your web pages of interest.
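If you want to check several source URLs at once, you can evaluate robots.txt rules yourself with Python's standard-library `urllib.robotparser`. This is a local sketch with placeholder rules and URLs; it only approximates the crawler's own RFC 9309 handling:

```python
from urllib.robotparser import RobotFileParser

def is_allowed(robots_txt, url, user_agent="*"):
    """Evaluate robots.txt rules locally, without a network fetch."""
    parser = RobotFileParser()
    parser.parse(robots_txt.splitlines())
    return parser.can_fetch(user_agent, url)

# Placeholder rules: block /private/, allow everything else.
rules = """User-agent: *
Disallow: /private/
"""
```

For a live site, you would fetch `https://<host>/robots.txt` and pass its text to `is_allowed` for each source URL you plan to register.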

**Important**  
When selecting websites to crawl, you must adhere to the [Amazon Acceptable Use Policy](https://aws.amazon.com/aup/) and all other Amazon terms. Remember that you must only use the web crawler to index your own web pages, or web pages that you have authorization to crawl.

Make sure you are not crawling potentially excessive web pages. We recommend that you don't crawl large websites, such as wikipedia.org, without filters or scope limits, because crawling them can take a very long time.

[Supported file types](https://docs.aws.amazon.com/bedrock/latest/userguide/knowledge-base-ds.html) are crawled regardless of scope, as long as there's no exclusion pattern for the file type.

### Website domain range for crawling URLs
<a name="ds-sync-scope"></a>

You can limit the scope of the URLs to crawl based on each page URL's specific relationship to the seed URLs. For faster crawls, you can limit URLs to those with the same host and initial URL path as the seed URL. For broader crawls, you can choose to crawl URLs with the same host or within any subdomain of the seed URL.

You can choose from the following options.
+ Default: Limit crawling to web pages that belong to the same host and with the same initial URL path. For example, with a seed URL of "https://aws.amazon.com/bedrock/" then only this path and web pages that extend from this path will be crawled, like "https://aws.amazon.com/bedrock/agents/". Sibling URLs like "https://aws.amazon.com/ec2/" are not crawled, for example.
+ Host only: Limit crawling to web pages that belong to the same host. For example, with a seed URL of "https://aws.amazon.com/bedrock/", then web pages with "https://aws.amazon.com" will also be crawled, like "https://aws.amazon.com/ec2".
+ Subdomains: Include crawling of any web page that has the same primary domain as the seed URL. For example, with a seed URL of "https://aws.amazon.com/bedrock/" then any web page that contains "amazon.com" (subdomain) will be crawled, like "https://www.amazon.com".
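The three scope options can be approximated with Python's standard-library `urllib.parse`. This is an illustrative sketch of the classification logic, not the service's implementation; in particular, the naive primary-domain extraction mishandles suffixes such as `co.uk`:

```python
from urllib.parse import urlparse

def in_scope(seed, candidate, scope="default"):
    """Classify a candidate URL against the seed URL for the chosen scope option."""
    s, c = urlparse(seed), urlparse(candidate)
    if scope == "default":
        # Same host, and the candidate path extends the seed's path.
        return c.netloc == s.netloc and c.path.startswith(s.path)
    if scope == "host":
        # Same host, any path.
        return c.netloc == s.netloc
    if scope == "subdomains":
        # Same primary domain (naive: last two labels of the host name).
        primary = ".".join(s.netloc.split(".")[-2:])
        return c.netloc == primary or c.netloc.endswith("." + primary)
    raise ValueError("unknown scope: " + scope)

seed = "https://aws.amazon.com/bedrock/"
```

For example, `in_scope(seed, "https://aws.amazon.com/ec2/", "default")` is false, but the same URL is in scope under the `host` option.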

**Note**  
Make sure you are not crawling potentially excessive web pages. We recommend that you don't crawl large websites, such as wikipedia.org, without filters or scope limits, because crawling them can take a very long time.  
[Supported file types](https://docs.aws.amazon.com/bedrock/latest/userguide/knowledge-base-ds.html) are crawled regardless of scope, as long as there's no exclusion pattern for the file type.

### Use a URL regex filter to include or exclude URLs
<a name="ds-inclusion-exclusion"></a>

You can include or exclude certain URLs in accordance with your scope. [Supported file types](https://docs.aws.amazon.com/bedrock/latest/userguide/knowledge-base-ds.html) are crawled regardless of scope, as long as there's no exclusion pattern for the file type. If you specify both an inclusion and an exclusion filter and both match a URL, the exclusion filter takes precedence and the web content isn't crawled.

**Important**  
Problematic regular expression filter patterns that lead to [catastrophic backtracking](https://docs.aws.amazon.com/codeguru/detector-library/python/catastrophic-backtracking-regex/) or look-ahead are rejected.

For example, to exclude URLs that end with ".pdf" (PDF web page attachments), use the filter pattern `.*\.pdf$`.
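The precedence rule can be sketched in a few lines of Python. This is illustrative only; the service-side regex dialect and matching semantics may differ:

```python
import re

def should_crawl(url, include=None, exclude=None):
    """Apply include/exclude regex filters to a URL.
    If both filters match, the exclusion wins and the URL is skipped."""
    if exclude and re.match(exclude, url):
        return False
    if include:
        return bool(re.match(include, url))
    return True
```

For instance, with both `include=r".*manual.*"` and `exclude=r".*\.pdf$"`, the URL `https://example.com/manual.pdf` is not crawled, because the exclusion filter takes precedence.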

### Throttle crawling speed
<a name="ds-throttle-crawling"></a>

You can set the number of URLs that Amazon Bedrock in SageMaker Unified Studio crawls per minute (1 to 300 URLs per host per minute). Higher values decrease synchronization time but increase the load on the host.
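The trade-off is easy to quantify: a crawl's minimum duration is roughly the page count divided by the configured rate. A quick sketch:

```python
def estimated_sync_minutes(num_pages, urls_per_minute):
    """Lower bound on crawl time at the configured throttle rate.
    Uses ceiling division so a partial minute still counts."""
    return -(-num_pages // urls_per_minute)
```

At the maximum rate of 300 URLs per minute, a 900-page site takes at least 3 minutes to crawl; lowering the rate to 100 stretches that to at least 9 minutes but reduces load on the host.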

### Incremental syncing
<a name="ds-incremental-sync"></a>

Each time the web crawler runs, it retrieves content for all URLs that are reachable from the source URLs and that match the scope and filters. For incremental syncs after the first sync of all content, Amazon Bedrock updates your knowledge base with new and modified content, and removes old content that is no longer present. Occasionally, the crawler may not be able to tell whether content was removed from the website; in this case, it errs on the side of preserving old content in your knowledge base.
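An incremental sync behaves like a diff between two crawl snapshots. The following is a minimal illustration, not the service's algorithm (and, as noted above, real deletions may be preserved when removal can't be confirmed):

```python
def diff_sync(previous, current):
    """previous and current map URL -> content hash for two successive crawls."""
    added    = {u for u in current if u not in previous}
    modified = {u for u in current if u in previous and current[u] != previous[u]}
    removed  = {u for u in previous if u not in current}
    return added, modified, removed
```

Added and modified URLs are re-ingested into the knowledge base; removed URLs are candidates for deletion from it.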

To sync your data source with your knowledge base, see [Synchronize an Amazon Bedrock Knowledge Base](kb-sync.md).

## Create a knowledge base with a web crawler
<a name="data-source-document-web-crawler-procedure"></a>

**To create a Knowledge Base with a web crawler**

1. Navigate to the Amazon SageMaker Unified Studio landing page by using the URL from your administrator.

1. Access Amazon SageMaker Unified Studio using your IAM or single sign-on (SSO) credentials. For more information, see [Access Amazon SageMaker Unified Studio](getting-started-access-the-portal.md).

1. Choose the **Build** menu at the top of the page.

1. In the **MACHINE LEARNING & GENERATIVE AI** section, choose **My apps**.

1. In the **Select or create a new project to continue** dialog box, select the project that you want to use.

1. In the left pane, choose **Asset gallery**.

1. Choose **My components**.

1. In the **Components** section, choose **Create component** and then **Knowledge Base**. The **Create Knowledge Base** pane is shown.

1. For **Name**, enter a name for the Knowledge Base.

1. For **Description**, enter a description for the Knowledge Base.

1. In **Select data source type**, do one of the following:
   + Use a document as a data source by doing the following:

     1. Select **Local file**. 

     1. Choose **Click to upload** and upload the document that you want the Knowledge Base to use. Alternatively, add your source documents by dragging and dropping the document from your computer.

     For more information, see [Use a Local file as a data source](data-source-document.md).
   + Use a web crawler as a data source by doing the following:

     1. Select **Web crawler**.

      1. For **Source URLs**, enter the URLs that you want to crawl. You can add up to 9 additional URLs by selecting **Add Source URLs**. By providing a source URL, you are confirming that you are authorized to crawl its domain.

     1. (Optional) Choose **Edit advanced web crawler configs** to make the following optional configuration changes:
        + **Website domain range**. Set the domain that you want the Knowledge Base to crawl. For more information, see [Website domain range for crawling URLs](#ds-sync-scope).
        + **Maximum throttling of crawling speed**. Set the speed at which the Knowledge Base crawls through the source URLs. For more information, see [Throttle crawling speed](#ds-throttle-crawling).
        + **URL regex filter**. Set regex filters for including (**Include patterns**) or excluding (**Exclude patterns**) URLs from the web crawl. For more information, see [Use a URL regex filter to include or exclude URLs](#ds-inclusion-exclusion). 
        + Choose **Back** to leave the web crawler configuration pane.

1. For **Parsing**, choose either **Default** parsing or **Parsing with foundation model**.

1. If you choose **parsing with foundation model**, do the following: 

   1. For **Choose a foundation model for parsing**, select your preferred foundation model. You can only choose models that your administrator has enabled for parsing. If you don't see a suitable model, contact your administrator. 

   1. (Optional) Overwrite the **Instructions for the parser** to suit your specific needs.

1. (Optional) For **Embeddings model**, choose a model for converting your data into vector embeddings, or use the default model.

1. Choose **Create** to create the Knowledge Base.

1. Use the Knowledge Base in an app by doing one of the following:
   + If your app is a chat agent app, follow [Add an Amazon Bedrock Knowledge Base component to a chat agent app](add-kb-component-chat-app.md).
   + If your app is a flow app, follow [Add a Knowledge Base component to a flow app](add-kb-component-prompt-flow-app.md).

# Use project data as a data source
<a name="data-source-project"></a>

You can configure an Amazon Bedrock knowledge base to use data sources that are already configured for your project.

**Topics**
+ [Project data sources](#data-source-project-data-sources)
+ [Create a knowledge base with a project data source](#data-source-project-procedure)

## Project data sources
<a name="data-source-project-data-sources"></a>

You can include the following data sources from your project:

### Amazon S3 bucket
<a name="data-source-project-s3"></a>

[Amazon S3](https://docs.aws.amazon.com/s3/) is an object storage service that stores data as objects within buckets. You can use files in your project's bucket as a data source for a knowledge base.

### Amazon Redshift
<a name="data-source-project-redshift"></a>

[Amazon Redshift](https://docs.aws.amazon.com/redshift/) is a serverless data warehouse service that automatically provisions and scales data warehouse capacity to deliver high performance for demanding and unpredictable workloads without the need to manage infrastructure.

You can include all data tables from an Amazon Redshift database or select up to 50 data tables from the available schemas. After selecting the tables, you can select the columns that you want to include. You can also preview data from the database, based on the selected columns.

### Lakehouse architecture
<a name="data-source-project-lakehouse"></a>

[Lakehouse architecture](https://docs.aws.amazon.com/sagemaker-lakehouse-architecture/latest/userguide/what-is-smlh.html) unifies your data across Amazon S3 data lakes and Amazon Redshift data warehouses.

## Create a knowledge base with a project data source
<a name="data-source-project-procedure"></a>

The following procedure shows how to create a knowledge base with an Amazon S3 bucket, an Amazon Redshift data warehouse, or with lakehouse architecture. 

**To create a knowledge base with a project data source**

1. Navigate to the Amazon SageMaker Unified Studio landing page by using the URL from your administrator.

1. Access Amazon SageMaker Unified Studio using your IAM or single sign-on (SSO) credentials. For more information, see [Access Amazon SageMaker Unified Studio](getting-started-access-the-portal.md).

1. Choose the **Build** menu at the top of the page.

1. In the **MACHINE LEARNING & GENERATIVE AI** section, choose **My apps**.

1. In the **Select or create a new project to continue** dialog box, select the project that you want to use.

1. In the left pane, choose **Asset gallery**.

1. Choose **My components**.

1. In the **Components** section, choose **Create component** and then **Knowledge Base**. The **Create Knowledge Base** pane is shown.

1. For **Name**, enter a name for the Knowledge Base.

1. For **Description**, enter a description for the Knowledge Base.

1. For **Select data source type**, select **Project data sources**.

1. In **Select data source**, select an existing data source (**S3**, **Redshift**, or **Lakehouse**). Alternatively choose to add a new connection. 
   + **S3** – Do the following: 

      1. For **S3 URI**, enter the Amazon S3 Uniform Resource Identifier (URI) of the file or folder that you want to use. Alternatively, choose **Browse** to browse the bucket and choose a file or folder.

     1. Choose **Save** to save your changes.
   + **Redshift (Lakehouse)** – Do the following:

     1. For **Select a database** select the database that you want to use.

      1. Choose **Update data tables and columns** to choose the tables and columns that you want to use. To preview the data from the selections you made, choose **Data**.

     1. Choose **Save** to save your changes.
   + **Lakehouse** – Do the following:

     1. For **Select catalog** select the catalog that you want to use.

     1. For **Select a database** select the database that you want to use.

      1. Choose **Update data tables and columns** to choose the tables and columns that you want to use. To preview the data from the selections you made, choose **Data**.

     1. Choose **Save** to save your changes.
   + (Optional) For Amazon Redshift and lakehouse architecture data sources you can make the following configuration changes:
     + **Maximum query time** ‐ Limit the time that a query can take by setting a maximum query time, in seconds. 
     + **Descriptions** ‐ Add descriptions and annotations to the names of tables and columns to improve the accuracy of responses from a chat agent app.
     + **Curated queries** ‐ Use curated queries that help guide the agent to create better responses. A curated query is an example question along with the matching SQL query for the question.

1. Choose **Create** to create the Knowledge Base.

1. Use the Knowledge Base in an app by doing one of the following:
   + If your app is a chat agent app, follow [Add an Amazon Bedrock Knowledge Base component to a chat agent app](add-kb-component-chat-app.md).
   + If your app is a flow app, follow [Add a Knowledge Base component to a flow app](add-kb-component-prompt-flow-app.md).

# Understanding security boundaries with structured data sources in an Amazon Bedrock knowledge base
<a name="kb-security-boundaries"></a>

Use the following information to understand how security boundaries affect structured data sources in an Amazon Bedrock knowledge base.

**Topics**
+ [Accessing structured data in an Amazon Bedrock knowledge base](#kb-data-access)
+ [Database and table selection as query guidelines](#kb-query-guidelines)
+ [Reliable security boundaries](#kb-reliable-boundaries)
+ [Best practices for sensitive data](#kb-best-practices)

## Accessing structured data in an Amazon Bedrock knowledge base
<a name="kb-data-access"></a>

When you create an Amazon Bedrock knowledge base with a structured data source such as Amazon Redshift, the knowledge base operates with the same permissions as your project user role. This means the knowledge base can potentially access any data that your project role has permission to access. This includes all databases accessible to your project and tables within those databases (both owned by your project and subscribed from other projects through the Business Data Catalog).

## Database and table selection as query guidelines
<a name="kb-query-guidelines"></a>

Configure your knowledge base by selecting a database and specifying which tables and columns to use. Customize your selection by including or excluding tables and columns according to your requirements. These selections help the knowledge base generate more accurate SQL queries by:
+ Focusing the model on relevant data sources
+ Reducing unnecessary references to irrelevant tables or columns
+ Helping prioritize which data should be considered when answering queries

However, due to the nature of large language model based SQL generation:
+ These selections are treated as recommendations rather than strict security boundaries.
+ The knowledge base may occasionally generate queries that reference databases, tables, or columns outside your specified selections.
+ Actual query execution is still governed by your project's permissions.

## Reliable security boundaries
<a name="kb-reliable-boundaries"></a>

The guaranteed security boundary is at the project level. A knowledge base can never access data from another project unless that data has been explicitly shared with your project. All data access is subject to authentication and authorization through AWS Identity and Access Management and Amazon DataZone project permissions.

## Best practices for sensitive data
<a name="kb-best-practices"></a>

If your project contains both sensitive and non-sensitive data, and you want to ensure the knowledge base only accesses specific non-sensitive data, consider these approaches:

### Create a dedicated *knowledge base-safe* project
<a name="kb-dedicated-project"></a>
+ Create a separate project specifically for knowledge base usage
+ Use the Business Data Catalog to publish only non-sensitive tables from source projects
+ Have your knowledge base-safe project subscribe only to the tables intended for knowledge base access
+ Build knowledge bases exclusively in this controlled environment

### Implement guardrails in your chat agent app
<a name="kb-guardrails"></a>
+ Deploy guardrails to detect and block prompts that attempt to manipulate the knowledge base.
+ Configure content filtering to prevent SQL injection patterns in prompts.
+ Set up rejection criteria for prompts that try to bypass configured constraints.

For information about guardrails, see [Safeguard your Amazon Bedrock app with a guardrail](guardrails.md).

# Chunking and parsing with knowledge bases
<a name="kb-chunking-parsing"></a>

Chunking and parsing are preprocessing techniques used to prepare and organize textual data for efficient storage, retrieval, and utilization by a model. You use chunking and parsing with the following data sources:
+ [local file](data-source-document.md) 
+ [Amazon S3 bucket](data-source-project.md#data-source-project-s3)
+ [Web crawler](data-source-document-web-crawler.md)

**Topics**
+ [Chunking](#kb-chunking)
+ [Parsing](#kb-parsing)

## Chunking
<a name="kb-chunking"></a>

When ingesting your data, Amazon Bedrock first splits your documents or content into manageable chunks for efficient data retrieval. The chunks are then converted to embeddings and written to a vector index (vector representation of the data), while maintaining a mapping to the original document. The vector embeddings allow the texts to be quantitatively compared.

Amazon Bedrock supports different approaches to [chunking](https://docs.aws.amazon.com/bedrock/latest/userguide/kb-chunking.html). Amazon Bedrock in SageMaker Unified Studio supports *default chunking* which splits content into text chunks of approximately 300 tokens. The chunking process honors sentence boundaries, ensuring that complete sentences are preserved within each chunk.
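As a rough illustration of sentence-boundary-aware chunking, the sketch below splits text into chunks of at most a given size while keeping whole sentences together. It is a naive approximation that counts whitespace-separated words as "tokens"; Bedrock's actual tokenizer and splitting logic differ:

```python
import re

def chunk_text(text, max_tokens=300):
    """Split text into chunks of at most max_tokens words,
    never breaking a sentence across two chunks."""
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    chunks, current, count = [], [], 0
    for sentence in sentences:
        n = len(sentence.split())
        # Start a new chunk if this sentence would overflow the current one.
        if current and count + n > max_tokens:
            chunks.append(" ".join(current))
            current, count = [], 0
        current.append(sentence)
        count += n
    if current:
        chunks.append(" ".join(current))
    return chunks
```

Each chunk would then be converted to an embedding and written to the vector index, with a mapping back to the source document.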

You can set the maximum number of source chunks to return from the vector store. For more information, see [Add an Amazon Bedrock Knowledge Base component to a chat agent app](add-kb-component-chat-app.md).

## Parsing
<a name="kb-parsing"></a>

Parsing involves analyzing the structure of information to understand its components and their relationships. With Amazon Bedrock in SageMaker Unified Studio, you can use two types of parser. 
+ Default parsing – Only parses text in your documents. This parser doesn't incur any usage charges.
+ Foundation model parsing – Processes multimodal data, including both text and images, using a foundation model. This parser provides you the option to customize the prompt used for data extraction. The cost of this parser depends on the number of tokens processed by the foundation model. For a list of models that support parsing of Amazon Bedrock knowledge base data, see [Supported models and Regions for parsing](https://docs.aws.amazon.com/bedrock/latest/userguide/knowledge-base-supported.html#knowledge-base-supported-parsing).

  There are additional costs to using foundation model parsing. This is due to its use of a foundation model. The cost depends on the amount of data you have. See [Amazon Bedrock pricing](https://aws.amazon.com/bedrock/pricing/) for more information on the cost of foundation models.

  Amazon Bedrock in SageMaker Unified Studio only supports foundation model parsing with PDF format files. If your files aren't in PDF format, you must convert them to PDF format before you can apply foundation model parsing.

There are limits for the types of files and total data that can be parsed using parsing. For information on the file types for parsing, see [Document formats](https://docs.aws.amazon.com/bedrock/latest/userguide/knowledge-base-ds.html#kb-ds-supported-doc-formats-limits). For information on the total data that can be parsed using foundation model parsing, see [Quotas](https://docs.aws.amazon.com/bedrock/latest/userguide/quotas.html).

For more information, see [How content chunking and parsing works for knowledge bases](https://docs.aws.amazon.com/bedrock/latest/userguide/kb-chunking-parsing.html).

To create a Knowledge Base that uses an embeddings model, vector store, and parsing, see [Create an Amazon Bedrock Knowledge Base component](creating-a-knowledge-base-component.md).

You can create a Knowledge Base as a component in an Amazon Bedrock in SageMaker Unified Studio project. If you are creating an app, you can also create a Knowledge Base when you configure the app. When you create a Knowledge Base, you choose your data source, an embeddings model for transforming your data into vectors, and a vector store to store and manage the vectors. You can also specify how the Knowledge Base should preprocess data from the data source, through chunking and parsing. The following procedure demonstrates how to create a Knowledge Base in Amazon Bedrock in SageMaker Unified Studio.

**To create a Knowledge Base**

1. Navigate to the Amazon SageMaker Unified Studio landing page by using the URL from your administrator.

1. Access Amazon SageMaker Unified Studio using your IAM or single sign-on (SSO) credentials. For more information, see [Access Amazon SageMaker Unified Studio](getting-started-access-the-portal.md).

1. Choose the **Build** menu at the top of the page.

1. In the **MACHINE LEARNING & GENERATIVE AI** section, choose **My apps**.

1. In the **Select or create a new project to continue** dialog box, select the project that you want to use.

1. In the left pane, choose **Asset gallery**.

1. Choose **My components**.

1. In the **Components** section, choose **Create component** and then **Knowledge Base**. The **Create Knowledge Base** pane is shown.

1. For **Name**, enter a name for the Knowledge Base.

1. For **Description**, enter a description for the Knowledge Base.

1. In **Add data sources**, do one of the following:
   + Use a document as a data source by doing the following:

     1. Choose **Local file**. 

     1. Choose **Click to upload** and upload the document that you want the Knowledge Base to use. Alternatively, add your source documents by dragging and dropping the document from your computer.

     For more information, see [Use a Local file as a data source](data-source-document.md).
   + Use a web crawler as a data source by doing the following:

     1. Choose **Web crawler**.

      1. For **Source URLs**, enter the URLs that you want to crawl. You can add up to 9 additional URLs by selecting **Add Source URLs**. By providing a source URL, you are confirming that you are authorized to crawl its domain.

     1. (Optional) Choose **Specify web crawler configs** to make the following optional configuration changes:
        + **Website domain range**. Set the domain that you want the Knowledge Base to crawl. For more information, see [Website domain range for crawling URLs](data-source-document-web-crawler.md#ds-sync-scope).
        + **Maximum throttling of crawling speed**. Set the speed at which the Knowledge Base crawls through the source URLs. For more information, see [Throttle crawling speed](data-source-document-web-crawler.md#ds-throttle-crawling).
        + **URL regex filter**. Set regex filters for including (**Include patterns**) or excluding (**Exclude patterns**) URLs from the web crawl. For more information, see [Use a URL regex filter to include or exclude URLs](data-source-document-web-crawler.md#ds-inclusion-exclusion). 
        + Choose **Back** to leave the web crawler configuration pane.

1. In **Configurations**, under **Data storage and processing**, do the following:

   1. For **Embeddings model**, select a foundation model from the dropdown to use for transforming your data into vector embeddings.

   1. For **Embedding type** and **Vector dimensions**, select an option from the dropdown to optimize accuracy, cost, and latency. Your options for embedding types and vector dimensions may be limited depending on the embeddings model that you chose.
**Note**  
Amazon OpenSearch Serverless is the only vector store that supports binary vector embeddings. Floating-point vector embeddings are supported by all available vector stores.

   1. For **Vector store** choose from one of the following options:
      + **Vector engine for Amazon OpenSearch Serverless** ‐ Provides contextually relevant responses across billions of vectors in milliseconds. Supports searches combined with text-based keywords for hybrid requests.
      + **Amazon S3 Vectors** ‐ Optimizes cost-effectiveness, durability, and latency for storage of large, long-term vector data sets. Amazon S3 Vectors does not support web crawler data sources. Supports metadata for enhanced search and filtering capabilities.
**Note**  
Amazon S3 Vectors for Amazon Bedrock in SageMaker Unified Studio is available in all AWS Regions where both Amazon Bedrock and Amazon S3 Vectors are available. For information about regional availability of Amazon S3 Vectors, see [Amazon S3 Vectors](https://docs.aws.amazon.com/AmazonS3/latest/userguide/s3-vectors-regions-quotas.html) in the *Amazon S3 User Guide*.
      + **Amazon Neptune Analytics (GraphRAG)** ‐ Provides high-performance graph analytics and graph-based Retrieval Augmented Generation (GraphRAG) solutions. You must have access to Claude 3 Haiku in order to use this vector store. Contact your administrator if you do not have the necessary permissions.

      Once you select an option for your vector store, Amazon Bedrock in SageMaker Unified Studio will create the vector store on your behalf.

   1. For **Chunking strategy**, choose **Default**, **Fixed sized**, **Hierarchical**, **Semantic**, or **None**. These options represent different methods for breaking down data into smaller segments before embedding.

   1. For **Parsing strategy**, choose either **Bedrock default parser** or **Foundation model as a parser**. If you choose **Foundation model as a parser**, do the following:

      1. For **Choose a foundation model for parsing**, select your preferred foundation model. You can only choose models that your administrator has enabled for parsing. If you don't see a suitable model, contact your administrator. 

      1. (Optional) Overwrite the **Instructions for the parser** to suit your specific needs.

1. Choose **Create** to create the Knowledge Base.

1. Use the Knowledge Base in an app by doing one of the following:
   + If your app is a chat agent app, follow [Add an Amazon Bedrock Knowledge Base component to a chat agent app](add-kb-component-chat-app.md).
   + If your app is a flow app, follow [Add a Knowledge Base component to a flow app](add-kb-component-prompt-flow-app.md).

# Add an Amazon Bedrock Knowledge Base component to a chat agent app
<a name="add-kb-component-chat-app"></a>

In this procedure, you add a Knowledge Base component to an existing [chat agent app](create-chat-app.md).

After adding a Knowledge Base component, you can make the following configuration changes.

**Search type**  
You can select a strategy for searching data sources in your knowledge base. Default search chooses the best option between hybrid search and semantic search for your vector store. You can override the default search type and choose either hybrid search (semantic and text) or semantic search. Hybrid search combines relevancy scores from semantic and text search to provide greater accuracy. Semantic search uses vector embeddings to deliver relevant results. For more information, see [Amazon Bedrock Knowledge Bases now supports hybrid search](https://aws.amazon.com/blogs/machine-learning/amazon-bedrock-knowledge-bases-now-supports-hybrid-search/).

**Maximum number of source chunks**  
When you query a knowledge base, the model returns up to five results in the response by default. Each result corresponds to a source chunk. You can edit the maximum number of retrieved results to return from the vector store. For more information, see [Chunking](kb-chunking-parsing.md#kb-chunking).

**To add a Knowledge Base component to a chat agent app**

1. Navigate to the Amazon SageMaker Unified Studio landing page by using the URL from your administrator.

1. Access Amazon SageMaker Unified Studio using your IAM or single sign-on (SSO) credentials. For more information, see [Access Amazon SageMaker Unified Studio](getting-started-access-the-portal.md).

1. If the project that you want to use isn't already open, do the following:

   1. Choose the current project at the top of the page. If a project isn't already open, choose **Select a project**.

   1. Select **Browse all projects**. 

   1. In **Projects** select the project that you want to use.

1. Choose the **Build** menu option at the top of the page.

1. In **MACHINE LEARNING & GENERATIVE AI** choose **My apps**.

1. In **Apps** choose the chat agent app that you want to add the knowledge base component to.

1. In the **Configs** pane, choose **Data**.

1. Select **Use Knowledge Base**.

1. For **Select Knowledge Base**, select the Knowledge Base component that you want to use. To create a Knowledge Base component, see [Create an Amazon Bedrock Knowledge Base component](creating-a-knowledge-base-component.md).

1. (Optional) Choose **Edit advanced search configs** to set advanced search configurations. 

   1. In **Search type**, turn on **Override default search** to choose a different search type. You can choose **Hybrid search** (combines relevancy scores from semantic and text search to provide greater accuracy) or **Semantic search** (uses vector embeddings to deliver relevant results).

   1. (Optional) In **Maximum number of source chunks**, choose the maximum number of source chunks to use. 

1. Choose **Save** to save your changes.

# Add a Knowledge Base component to a flow app
<a name="add-kb-component-prompt-flow-app"></a>

In this procedure, you add a Knowledge Base component to an existing [flow app](create-flows-app.md).

1. Navigate to the Amazon SageMaker Unified Studio landing page by using the URL from your administrator.

1. Access Amazon SageMaker Unified Studio using your IAM or single sign-on (SSO) credentials. For more information, see [Access Amazon SageMaker Unified Studio](getting-started-access-the-portal.md).

1. If the project that you want to use isn't already open, do the following:

   1. Choose the current project at the top of the page. If a project isn't already open, choose **Select a project**.

   1. Select **Browse all projects**. 

   1. In **Projects** select the project that you want to use.

1. Choose the **Build** menu option at the top of the page.

1. In **MACHINE LEARNING & GENERATIVE AI** choose **My apps**.

1. In **Apps** choose the flow app that you want to add the knowledge base component to.

1. In the **Flow app builder** pane, select the **Nodes** tab.

1. From the **Data** section, drag a **Knowledge Base** node onto the flow builder canvas.

1. The circles on the nodes are connection points. Draw a line from the circle on the upstream node (such as the **Flow input** node) to the circle on the **Input** section of the Knowledge Base node that you just added. 

1. Connect the **Output** of the Knowledge Base node to the downstream node that you want the Knowledge Base to send its output to. The flow should look similar to the following image:  
![\[Connect an Amazon Bedrock in SageMaker Unified Studio Knowledge Base node to a downstream node.\]](http://docs.aws.amazon.com/sagemaker-unified-studio/latest/userguide/images/bedrock/create-flow-in-kb-out.png)

1. In the flow builder, select the Knowledge Base node that you just added. 

1. In the **flow builder** pane, choose the **Configure** tab and do the following:

   1. For **Node name**, enter a name for the Knowledge Base node. 

   1. For **Select Knowledge Base** in the **Knowledge Base Details** section, select the Knowledge Base component that you want to use.

   1. For **Select response generation model**, select the model that you want the Knowledge Base to generate responses with.

   1. (Optional) In **Select guardrail** select an existing guardrail. For more information, see [Safeguard your Amazon Bedrock app with a guardrail](guardrails.md).

1. Choose **Save** to save your changes.

# Synchronize an Amazon Bedrock Knowledge Base
<a name="kb-sync"></a>

After you create a Knowledge Base data source, you synchronize your data so that the data can be queried. Synchronization converts the raw data in your data source into vector embeddings, based on the vector embeddings model and configurations you specified when you [created](creating-a-knowledge-base-component.md) the Knowledge Base.

If the data source is a web crawler, synchronization time can vary from minutes to hours, depending on the URLs you define.

**To synchronize a Knowledge Base**

1. Navigate to the Amazon SageMaker Unified Studio landing page by using the URL from your administrator.

1. Access Amazon SageMaker Unified Studio using your IAM or single sign-on (SSO) credentials. For more information, see [Access Amazon SageMaker Unified Studio](getting-started-access-the-portal.md).

1. Choose the **Build** menu at the top of the page.

1. In the **MACHINE LEARNING & GENERATIVE AI** section, choose **My apps**.

1. In the **Select or create a new project to continue** dialog box, select the project that you want to use.

1. In the left pane, choose **Asset gallery**.

1. In **Asset gallery**, choose **My components**.

1. Find the Knowledge Base that you want to synchronize, choose the menu option, and then select **Sync**.

1. Wait until the Knowledge Base synchronization completes.

# Safeguard your Amazon Bedrock app with a guardrail
<a name="guardrails"></a>

Guardrails for Amazon Bedrock lets you implement safeguards for your Amazon Bedrock in SageMaker Unified Studio app based on your use cases and responsible AI policies. You can create multiple guardrails tailored to different use cases and apply them across multiple foundation models, providing a consistent user experience and standardizing safety controls across generative AI apps. You can configure denied topics to disallow undesirable topics, and content filters to block harmful content in the prompts you send to a model and in the responses you get from a model. You can use guardrails with text-only foundation models.

You can use guardrails with Amazon Bedrock in SageMaker Unified Studio chat agent apps and flow apps. With a chat agent app, you can create a guardrail component when you [create the chat agent app](create-chat-app-with-components.md#chat-app-add-guardrail), or you can add a guardrail component that you previously created. For more information, see [Create an Amazon Bedrock guardrail component](creating-a-guardrail-component.md).

With a flow app, you can add a guardrail to [prompt](nodes.md#flow-node-prompt) nodes and to [knowledge base](nodes.md#flow-node-kb) nodes. 

**Topics**
+ [Guardrail policies](#guardrails-filters)
+ [Create an Amazon Bedrock guardrail component](creating-a-guardrail-component.md)
+ [Add an Amazon Bedrock guardrail component to a chat agent app](add-guardrail-component-chat-app.md)
+ [Add an Amazon Bedrock guardrail component to a flow app](add-guardrail-component-flow-app.md)

## Guardrail policies
<a name="guardrails-filters"></a>

A guardrail consists of the following policies to avoid content that falls into undesirable or harmful categories.
+ Content filters – Adjust filter strengths to filter input prompts or model responses containing harmful content.
+ Denied topics – You can define a set of topics that are undesirable in the context of your app. These topics will be blocked if detected in user queries or model responses.

### Content filters
<a name="guardrails-studio-content-filters"></a>

Guardrails in Amazon Bedrock in SageMaker Unified Studio support the following content filters to detect and filter harmful user inputs and FM-generated outputs.
+ **Hate** – Describes language or a statement that discriminates, criticizes, insults, denounces, or dehumanizes a person or group on the basis of an identity (such as race, ethnicity, gender, religion, sexual orientation, ability, and national origin).
+ **Insults** – Describes language or a statement that includes demeaning, humiliating, mocking, insulting, or belittling language. This type of language is also labeled as bullying.
+ **Sexual** – Describes language or a statement that indicates sexual interest, activity, or arousal using direct or indirect references to body parts, physical traits, or sex.
+ **Violence** – Describes language or a statement that includes glorification of or threats to inflict physical pain, hurt, or injury toward a person, group or thing.

Content filtering depends on the confidence classification of user inputs and FM responses across each of the four harmful categories. All input and output statements are classified into one of four confidence levels (NONE, LOW, MEDIUM, HIGH) for each harmful category. For example, if a statement is classified as *Hate* with HIGH confidence, the likelihood of the statement representing hateful content is high. A single statement can be classified across multiple categories with varying confidence levels. For example, a single statement can be classified as *Hate* with HIGH confidence, *Insults* with LOW confidence, *Sexual* with NONE confidence, and *Violence* with MEDIUM confidence.

For each of the harmful categories, you can configure the strength of the filters. The filter strength determines the degree of filtering harmful content. As you increase the filter strength, the likelihood of filtering harmful content increases and the probability of seeing harmful content in your app reduces. The following table shows the degree of content that each filter strength blocks and allows.


****  

| Filter strength | Blocked content confidence | Allowed content confidence | 
| --- | --- | --- | 
| None | No filtering | None, Low, Medium, High | 
| Low | High | None, Low, Medium | 
| Medium | High, Medium | None, Low | 
| High | High, Medium, Low | None | 
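The table's mapping can be sketched as a small lookup. The function and constant names here are illustrative only; they are not part of any Amazon Bedrock API.

```python
# Which confidence levels each filter strength blocks, per the table above.
BLOCKED_BY_STRENGTH = {
    "None": set(),
    "Low": {"HIGH"},
    "Medium": {"HIGH", "MEDIUM"},
    "High": {"HIGH", "MEDIUM", "LOW"},
}

def is_blocked(confidence, filter_strength):
    """Return True if content at the given confidence level is filtered."""
    return confidence in BLOCKED_BY_STRENGTH[filter_strength]
```

For example, with a **Medium** filter strength, content classified with HIGH or MEDIUM confidence in a category is blocked, while LOW-confidence content passes through.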

### Denied topics
<a name="guardrails-topic-policies"></a>

Guardrails can be configured with a set of denied topics that are undesirable in the context of your generative AI app. For example, a bank may want its online assistant to avoid any conversation related to investment advice and to avoid engaging in conversations related to fraudulent activities such as money laundering. 

You can define up to five denied topics. Input prompts and model completions will be evaluated against each of these topics. If one of the topics is detected, the blocked message configured as part of the guardrail will be returned to the user.

Denied topics can be defined by providing a natural language definition of the topic along with a few optional example phrases of the topic. The definition and example phrases are used to detect if an input prompt or a model completion belongs to the topic.

Denied topics are defined with the following parameters.
+ Name – The name of the topic. The name should be a noun phrase. Don't describe the topic in the name. For example:
  + **Investment Advice**
+ Definition – Up to 200 characters summarizing the topic. The definition should describe the content of the topic and its subtopics.
**Note**  
For best results, adhere to the following principles:  
Don't include examples or instructions in the description.
Don't use negative language (such as "don't talk about investment" or "no content about investment").

  The following is an example topic description that you can provide:
  + **Investment advice refers to inquiries, guidance or recommendations regarding the management or allocation of funds or assets with the goal of generating returns or achieving specific financial objectives.**
+ Sample phrases – A list of up to five sample phrases that refer to the topic. Each phrase can be up to 1,000 characters. A sample phrase is a prompt or continuation that shows the kind of content that should be filtered out. For example:
  + **Is investing in the stocks better than bonds?**
  + **Should I invest in gold?**
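As a hedged illustration, the following Python snippet models a denied topic with the limits described above (a definition of up to 200 characters and up to five sample phrases of up to 1,000 characters each) and checks them. The field names and helper are our own, not an Amazon Bedrock API, and the topic content is an example.

```python
# Hypothetical representation of a denied topic; not an AWS API shape.

def validate_denied_topic(topic):
    """Check a denied-topic definition against the documented limits."""
    assert len(topic["definition"]) <= 200, "definition exceeds 200 characters"
    phrases = topic.get("sample_phrases", [])
    assert len(phrases) <= 5, "no more than five sample phrases"
    assert all(len(p) <= 1000 for p in phrases), "phrase exceeds 1,000 characters"
    return True

investment_advice = {
    "name": "Investment Advice",  # a noun phrase, not a description
    "definition": (
        "Guidance or recommendations about the management or allocation of "
        "funds or assets with the goal of generating returns or achieving "
        "specific financial objectives."
    ),
    "sample_phrases": [
        "Is investing in the stocks better than bonds?",
        "Should I invest in gold?",
    ],
}

validate_denied_topic(investment_advice)
```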

# Create an Amazon Bedrock guardrail component
<a name="creating-a-guardrail-component"></a>

You can create a guardrail as a component in an Amazon Bedrock in SageMaker Unified Studio project. You can then add the guardrail component to a chat agent app. You can also create a guardrail component while you are creating a chat agent app. For an example, see [Step 2: Add a guardrail to your chat agent app](create-chat-app-with-components.md#chat-app-add-guardrail).

**To create a guardrail component**

1. Navigate to the Amazon SageMaker Unified Studio landing page by using the URL from your administrator.

1. Access Amazon SageMaker Unified Studio using your IAM or single sign-on (SSO) credentials. For more information, see [Access Amazon SageMaker Unified Studio](getting-started-access-the-portal.md).

1. Choose the **Build** menu at the top of the page.

1. In the **MACHINE LEARNING & GENERATIVE AI** section, choose **My apps**.

1. In the **Select or create a new project to continue** dialog box, select the project that you want to use.

1. In the left pane, choose **Asset gallery**.

1. Choose **My components**.

1. In the **Components** section, choose **Create component** and then **Guardrail**. The **Create guardrail** pane is shown.

1. For **Guardrail name**, enter a name for the guardrail.

1. For **Guardrail description**, enter a description for the guardrail.

1. In **Content filters**, do the following:

   1. Select **Enable content filters** to turn on content filtering. 

   1. For **Filter for prompts**, choose the filters that you want to apply to prompts. For more information, see [Content filters](guardrails.md#guardrails-studio-content-filters).

   1. If you want the filter to apply to responses that the model generates, select **Apply the same filters for responses**.

1. In **Blocked messaging**, do the following:

   1. For **Blocked messaging for prompts**, enter a message to display when the guardrail blocks content in the prompt. 

   1. If you want to show a different message when the guardrail blocks content from a model's response, do the following:

      1. Clear **Apply the same message for blocked responses**.

      1. For **Blocked messaging for responses**, enter a message to display when the guardrail blocks content in the response from the model.

1. Add a denied topic filter by doing the following: 

   1. Choose **Use advanced features**.

   1. Choose **Denied topics**.

   1. Choose **Add topic**.

   1. For **Name**, enter a name for the filter.

   1. For **Definition for topic**, enter a definition for the content that you want to deny.

   1. (Optional) To help guide the guardrail, do the following: 

      1. Choose **Sample phrases - optional**.

      1. For **Sample phrases**, enter a phrase.

      1. Choose **Add phrase**.

      1. Add up to four more phrases by repeating the previous two steps.

      1. Choose **Save**.

   For information about denied topics, see [Denied topics](guardrails.md#guardrails-topic-policies).

1. Choose **Create** to create the guardrail.

1. Add the guardrail component to a chat agent app by doing [Add an Amazon Bedrock guardrail component to a chat agent app](add-guardrail-component-chat-app.md).

# Add an Amazon Bedrock guardrail component to a chat agent app
<a name="add-guardrail-component-chat-app"></a>

In this procedure, you add a guardrail component to an existing [chat agent app](create-chat-app.md).

1. Navigate to the Amazon SageMaker Unified Studio landing page by using the URL from your administrator.

1. Access Amazon SageMaker Unified Studio using your IAM or single sign-on (SSO) credentials. For more information, see [Access Amazon SageMaker Unified Studio](getting-started-access-the-portal.md).

1. If the project that you want to use isn't already open, do the following:

   1. Choose the current project at the top of the page. If a project isn't already open, choose **Select a project**.

   1. Select **Browse all projects**. 

   1. In **Projects** select the project that you want to use.

1. Choose the **Build** menu option at the top of the page.

1. In **MACHINE LEARNING & GENERATIVE AI** choose **My apps**.

1. In **Apps** choose the chat agent app that you want to add the guardrail to.

1. In the **Configs** pane, choose **Guardrails**.

1. For **Guardrails**, select the guardrail component that you created in [Create an Amazon Bedrock guardrail component](creating-a-guardrail-component.md).

1. (Optional) Preview the guardrail by choosing **Preview**. From the preview, you can edit the guardrail if desired.

1. Choose **Save** to save your changes.

# Add an Amazon Bedrock guardrail component to a flow app
<a name="add-guardrail-component-flow-app"></a>

In this procedure, you add a guardrail component to the [knowledge base](nodes.md#flow-node-kb) node that you create in step 2 of [Create a flow app with Amazon Bedrock](build-flow.md). You can also add a guardrail to a [prompt](nodes.md#flow-node-prompt) node. To add a guardrail to a prompt node, use the following steps, but in step 7 select the prompt node (**Playlist\_generator\_node**).

1. Navigate to the Amazon SageMaker Unified Studio landing page by using the URL from your administrator.

1. Access Amazon SageMaker Unified Studio using your IAM or single sign-on (SSO) credentials. For more information, see [Access Amazon SageMaker Unified Studio](getting-started-access-the-portal.md).

1. If the project that you want to use isn't already open, do the following:

   1. Choose the current project at the top of the page. If a project isn't already open, choose **Select a project**.

   1. Select **Browse all projects**. 

   1. In **Projects** select the project that you want to use.

1. Choose the **Build** menu option at the top of the page.

1. In **MACHINE LEARNING & GENERATIVE AI** choose **My apps**.

1. Open the flow app that you created in [Create a flow app with Amazon Bedrock](build-flow.md).

1. In the canvas, select the knowledge base node (**Local\_bands\_knowledge\_base**).

1. In the **flow builder** pane, choose the **Configure** tab.

1. In the **Guardrail details** section, do one of the following: 
   + Select an existing guardrail to use.
   + Choose **Create a new guardrail** to create a new guardrail, and then do the following: 

     1. For **Guardrail name**, enter a name for the guardrail.

     1. For **Guardrail description**, enter a description for the guardrail.

     1. In **Content filters**, do the following:

        1. Select **Enable content filters** to turn on content filtering. 

        1. For **Filter for prompts**, choose the filters that you want to apply to prompts. For more information, see [Content filters](guardrails.md#guardrails-studio-content-filters).

        1. If you want the filter to apply to responses that the model generates, select **Apply the same filters for responses**.

     1. In **Blocked messaging**, do the following:

        1. For **Blocked messaging for prompts**, enter a message to display when the guardrail blocks content in the prompt. 

        1. If you want to show a different message when the guardrail blocks content from a model's response, do the following:

           1. Clear **Apply the same message for blocked responses**.

           1. For **Blocked messaging for responses**, enter a message to display when the guardrail blocks content in the response from the model.

     1. Add a denied topic filter by doing the following: 

        1. Choose **Use advanced features**.

        1. Choose **Denied topics**.

        1. Choose **Add topic**.

        1. For **Name**, enter a name for the filter.

        1. For **Definition for topic**, enter a definition for the content that you want to deny.

        1. (Optional) To help guide the guardrail, do the following: 

           1. Choose **Sample phrases - optional**.

           1. For **Sample phrases**, enter a phrase.

           1. Choose **Add phrase**.

           1. Add up to four more phrases by repeating the previous two steps.

           1. Choose **Save**.

        For information about denied topics, see [Denied topics](guardrails.md#guardrails-topic-policies).

     1. Choose **Create** to create the guardrail.

1. Choose **Save** to save your changes to your flow app.

1. Test your prompt by doing the following:

   1. On the right side of the app flow page, choose **<** to open the test pane.

   1. In **Enter prompt**, enter a phrase that violates your guardrail.

   1. Press Enter on the keyboard or choose the run button to test the prompt. 

   1. In the response, you should see the text **guardrail applied** under the affected prompt. Choose the prompt to see why the guardrail rejected it.

# Call functions from your Amazon Bedrock chat agent app
<a name="functions"></a>

Amazon Bedrock in SageMaker Unified Studio functions let a model include information that it has no previous knowledge of in its response. For example, you can use a function to include dynamic information in a model's response such as a weather forecast, sports results, or traffic conditions. 

In Amazon Bedrock in SageMaker Unified Studio, a function calls an API hosted outside of Amazon Bedrock in SageMaker Unified Studio. You can either create the API yourself or use an existing API. To create an API, you can use [Amazon API Gateway](https://docs.aws.amazon.com/apigateway/). 

To use a function in Amazon Bedrock in SageMaker Unified Studio you add a *function component* to your app. As part of the function, you define an OpenAPI schema for the API that you want the model to call. You also specify how to authenticate the call to the API. When a model receives a prompt, it uses the schema and the prompt to determine if an API should be called and the parameters that the API should receive. If the API is called, the response from the model includes the output from the API. 

APIs that you call in a function must return a response that is less than 20K in size.

When you add a function to an app, you must specify the app's system instruction. The system instruction must be at least 40 characters long and should describe the new skills that the function introduces. 
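The length requirement can be sketched as a simple check; the instruction text below is a hypothetical example that mentions a function's skill.

```python
# Hypothetical system instruction for an app with a weather-forecast function.
instruction = (
    "You are a helpful assistant that can also look up the current weather "
    "forecast for a city by calling the forecast function."
)

# System instructions must be at least 40 characters long.
assert len(instruction) >= 40
```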

You can use functions in a [chat agent app](create-chat-app.md). 

**Topics**
+ [Function schema](#functions-schema)
+ [Authentication methods](#functions-authentication)
+ [Create an Amazon Bedrock function component](creating-a-function-component.md)
+ [Add a function component to an Amazon Bedrock chat agent app](add-function-component-chat-app.md)

## Function schema
<a name="functions-schema"></a>

Amazon Bedrock in SageMaker Unified Studio has the following requirements for the schema that you use to create a function.
+ The function schema must be [OpenAPI version 3.0.0](https://spec.openapis.org/oas/v3.0.0.html).
+ The function schema must be in JSON or YAML format.
+ The function can have no authentication, API key authentication, Bearer token authentication, or basic authentication. For more information, see [Authentication methods](#functions-authentication).
+ The schema can have zero or one server URL.
+ All [Operation Objects](https://swagger.io/specification/v3/#operation-object) must have a description.
+ All [Parameter Objects](https://swagger.io/specification/v3/#parameter-object) must have a description.
+ [Security scheme object](https://swagger.io/specification/v3/#security-scheme-object) must have a type that is either `apiKey` or `http`.

  When the type is `http`, the scheme field must either be `basic` or `bearer`.

  When the type is `apiKey`, the `in` property must be `query` or `header`. Also, the `name` property must be defined.
+ Amazon Bedrock in SageMaker Unified Studio only honors [globally-scoped security requirements](https://swagger.io/specification/v3/#security-requirement-object). For more information, see [Valid components for globally-scoped security requirements](#example-components). 
+ Parameters (**parameter.in**) must be passed through the query string or path. You can't use cookies or headers to pass parameters.
+ Parameters (**parameter schema type**) must be primitive types, arrays, or objects (one-level JSON). You can't pass complex nested objects.
+ Parameter content (**parameter.content**) is mutually exclusive with the schema. Schema is more commonly used. Use content only for more complex types, or for complex serialization scenarios that are not covered by style and explode. 
+ Parameter **style** and **explode** values must be `form` and `true` for query parameters, and `simple` and `false` for path parameters. For more information, see [Parameter Serialization](https://swagger.io/docs/specification/serialization/).
+ Request body content must be passed as `application/json`.
+ The schema can have up to 5 APIs, and an app can use up to 5 APIs across all functions. For the model to correctly choose a function, it is important to provide detailed descriptions of the API, including parameters, properties, and responses. 
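The following is a minimal, hypothetical schema (a fictional weather API at `https://api.example.com`) that follows the requirements above: one server URL, a described operation, a described query parameter with a primitive type, and a globally scoped `apiKey` security requirement. It is expressed here as a Python dictionary with spot checks for a few of the rules; this is a sketch, not a full schema validator.

```python
# Hypothetical OpenAPI 3.0.0 schema for illustration only.
schema = {
    "openapi": "3.0.0",
    "info": {"title": "Weather lookup (example)", "version": "1.0.0"},
    "servers": [{"url": "https://api.example.com"}],
    "security": [{"api_key": []}],  # globally scoped security requirement
    "components": {
        "securitySchemes": {
            "api_key": {"type": "apiKey", "name": "x-api-key", "in": "header"}
        }
    },
    "paths": {
        "/forecast": {
            "get": {
                "description": "Returns the forecast for a city.",
                "parameters": [
                    {
                        "name": "city",
                        "in": "query",
                        "required": True,
                        "description": "The city to look up.",
                        "schema": {"type": "string"},
                    }
                ],
                "responses": {
                    "200": {"description": "Forecast for the requested city."}
                },
            }
        }
    },
}

# Spot-check a few of the requirements listed above.
assert schema["openapi"] == "3.0.0"
assert len(schema.get("servers", [])) <= 1
for path in schema["paths"].values():
    for op in path.values():
        assert "description" in op  # all operations need descriptions
        for param in op.get("parameters", []):
            assert "description" in param  # all parameters need descriptions
            assert param["in"] in ("query", "path")
```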

### Valid components for globally-scoped security requirements
<a name="example-components"></a>

Amazon Bedrock in SageMaker Unified Studio only honors [globally-scoped security requirements](https://swagger.io/specification/v3/#security-requirement-object). That is, Amazon Bedrock in SageMaker Unified Studio ignores security requirements indicated in operation objects.

When the requirement array contains a security scheme object with type `http` and scheme of `bearer` or `basic`, the array must contain a single entry. Amazon Bedrock in SageMaker Unified Studio ignores further entries. 

When the requirement array contains a security scheme object with type `apiKey`, you can have a maximum of 2 entries.

For example, if you have the following [components](https://swagger.io/specification/v3/#components-object):

```
"components": {
  "securitySchemes": {
    "api_key_1": {
      "type": "apiKey",
      "name": "appid1",
      "in": "query"
    },
    "api_key_2": {
      "type": "apiKey",
      "name": "appid2",
      "in": "header"
    },
    "api_key_3": {
      "type": "apiKey",
      "name": "appid3",
      "in": "cookie"
    },
    "bearer_1": {
      "type": "http",
      "scheme": "bearer",
    },
    "bearer_2": {
      "type": "http",
      "scheme": "bearer",
    },
    "basic_1": {
      "type": "http",
      "scheme": "basic",
    },
    "basic_2": {
      "type": "http",
      "scheme": "basic",
    },
    "http_digest": {
      "type": "http",
      "scheme": "digest"
    },
    "oauth2_1": {
      "type": "oauth2"
    }
  }
}
```

The following are valid:

```
# 1 API key
"security": [
  {
    "api_key_1": []
  }
]

# 2 API keys
"security": [
  {
    "api_key_1": [],
    "api_key_2": []
  }
]

# Bearer
"security": [
  {
    "bearer_1": []
  }
]

# Basic
"security": [
  {
    "basic_1": []
  }
]
```

The following are invalid:

```
# Invalid: `type` must be `apiKey` or `http`
"security": [
  {
    "oauth2_1": []
  }
]

# Invalid: `scheme` must be `basic` or `bearer` when `type` is `http`
"security": [
  {
    "http_digest": []
  }
]

# Invalid: the requirement must contain only 1 entry when the scheme is `basic` or `bearer`
"security": [
  {
    "basic_1": [],
    "basic_2": []
  }
]

# Invalid: the requirement must not mix security types
"security": [
  {
    "api_key_1": [],
    "basic_1": []
  }
]

# Invalid: an API key must have its `in` property set to `header` or `query`
"security": [
  {
    "api_key_1": [],
    "api_key_3": []
  }
]

# Invalid: the requirement must not have more than 2 API keys
"security": [
  {
    "api_key_1": [],
    "api_key_2": [],
    "api_key_3": []
  }
]
```

## Authentication methods
<a name="functions-authentication"></a>

Amazon Bedrock in SageMaker Unified Studio supports the following methods for authenticating function calls to an API server. If you authenticate a function call, make sure that the credentials you provide are correct, because Amazon Bedrock in SageMaker Unified Studio doesn't verify the credentials before you use them in a function call.
+ **No authentication** – No authentication means that the client doesn't need to provide any credentials to access a resource or service. This method is typically used for publicly available resources that don't require any form of authentication.
+ **[API keys](https://swagger.io/docs/specification/authentication/api-keys/)** – An API key is a unique identifier used to authenticate a client application and allow it to access an API or service. You can add a maximum of two keys.
+ **[Bearer token](https://swagger.io/docs/specification/authentication/bearer-authentication/)** – A bearer token is an opaque string that represents an authentication credential. It is typically obtained after a successful authentication process, such as OAuth 2.0. This method allows the client to access protected resources without having to send the actual credentials (username and password) with each request.
**Note**  
Amazon Bedrock in SageMaker Unified Studio can't verify whether the token is valid or has expired. It is your responsibility to provide a valid token and to replace the token before it expires. If the token expires, Amazon Bedrock won't be able to successfully call APIs with the token.
+ **[Basic authentication](https://swagger.io/docs/specification/authentication/basic-authentication/)** – Basic authentication is a simple authentication scheme built into the HTTP protocol. The credentials are sent with every request, which can be a security concern if the connection is not secured using HTTPS. Basic authentication is generally considered less secure than other modern authentication methods and should be used with caution, especially in production environments.
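As a sketch of how each supported method is conveyed on an HTTP request: the helper function, key names, and credential values below are illustrative placeholders, not an Amazon Bedrock API.

```python
import base64

def auth_parts(method, **creds):
    """Return (headers, query_params) for a request using the given method."""
    if method == "none":
        return {}, {}
    if method == "api_key_header":
        # API key sent as a request header (the `name` from the security scheme).
        return {creds["name"]: creds["key"]}, {}
    if method == "api_key_query":
        # API key sent as a query-string parameter.
        return {}, {creds["name"]: creds["key"]}
    if method == "bearer":
        # Opaque token in the Authorization header.
        return {"Authorization": f"Bearer {creds['token']}"}, {}
    if method == "basic":
        # Base64-encoded user:password in the Authorization header.
        userpass = f"{creds['user']}:{creds['password']}".encode()
        return {"Authorization": "Basic " + base64.b64encode(userpass).decode()}, {}
    raise ValueError(f"unsupported method: {method}")
```

Note that basic credentials are only encoded, not encrypted, which is why the method is safe only over HTTPS.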

# Create an Amazon Bedrock function component
<a name="creating-a-function-component"></a>

You can create a function as a component in an Amazon Bedrock in SageMaker Unified Studio project. If you are creating an app, you can also create a function when you configure the app. 

**To create a function component**

1. Navigate to the Amazon SageMaker Unified Studio landing page by using the URL from your administrator.

1. Access Amazon SageMaker Unified Studio using your IAM or single sign-on (SSO) credentials. For more information, see [Access Amazon SageMaker Unified Studio](getting-started-access-the-portal.md).

1. Choose the **Build** menu at the top of the page.

1. In the **MACHINE LEARNING & GENERATIVE AI** section, choose **My apps**.

1. In the **Select or create a new project to continue** dialog box, select the project that you want to use.

1. In the left pane, choose **Asset gallery**.

1. Choose **My components**.

1. In the **Components** section, choose **Create component** and then **Function**. The **Create function** pane is shown.

1. For **Function name**, enter a name for the function.

1. For **Function description**, enter a description for the function. 

1. For **Function schema**, enter the OpenAPI schema for the API in JSON or YAML format. Alternatively, upload a JSON or YAML file by choosing **Import JSON/YAML**. You can clear the text box by choosing **Reset**.

1. Choose **Validate schema** to validate the schema. 

1. For **Authentication method**, select the authentication method for your API server. By default, Amazon Bedrock in SageMaker Unified Studio preselects the authentication method based on information it finds in your OpenAPI schema. For information about authentication methods, see [Authentication methods](functions.md#functions-authentication). 

1. Enter the information for the authentication method that you selected in the previous step.

1. For **API servers**, enter the URL for your server in **Server URL**. This value is autopopulated if the server URL is in the schema.

1. Choose **Create** to create your function.

1. Add your function to a chat agent app by doing [Add a function component to an Amazon Bedrock chat agent app](add-function-component-chat-app.md). 

# Add a function component to an Amazon Bedrock chat agent app
<a name="add-function-component-chat-app"></a>

In this procedure, you add a function component to an existing [chat agent app](create-chat-app.md). You can add up to 5 functions to an app. For each function you add, be sure to update the system instruction with information about the function.

**To add a function component to a chat agent app**

1. Navigate to the Amazon SageMaker Unified Studio landing page by using the URL from your administrator.

1. Access Amazon SageMaker Unified Studio using your IAM or single sign-on (SSO) credentials. For more information, see [Access Amazon SageMaker Unified Studio](getting-started-access-the-portal.md).

1. If the project that you want to use isn't already open, do the following:

   1. Choose the current project at the top of the page. If a project isn't already open, choose **Select a project**.

   1. Select **Browse all projects**. 

   1. In **Projects** select the project that you want to use.

1. Choose the **Build** menu option at the top of the page.

1. In **MACHINE LEARNING & GENERATIVE AI** choose **My apps**.

1. In **Apps** choose the chat agent app that you want to add the function component to.

1. In the **Configs** pane, do the following:

   1. For **Enter a system instruction**, enter or update the system prompt so that it describes the function.

   1. Choose **Functions**.

   1. For **Functions**, select the function component that you created in [Create an Amazon Bedrock function component](creating-a-function-component.md). 

1. Choose **Save** to save your changes.

# Use app history to view and restore versions of an Amazon Bedrock app
<a name="app-history"></a>

As you develop an Amazon Bedrock in SageMaker Unified Studio app (chat agent app or flow app), you make changes and improvements. When you save an app, Amazon Bedrock in SageMaker Unified Studio saves the current draft of the app (including its configuration information) as a version in the *app history*. At some point, you might want to view and restore a previous version of an app. For example, you might want to check the guardrail that a previous version of a chat agent app uses, or you might want to continue development starting from a previous version of the app. You can use the app history to view and restore previous app versions.

Within the app history, you can restore a previous version of the app. If you don't save the current draft before restoring a previous version, Amazon Bedrock in SageMaker Unified Studio automatically saves the draft for you. While you are viewing the app history for an app, you can't make changes to the app. 

**To view or restore a previous version of an app**

1. Navigate to the Amazon SageMaker Unified Studio landing page by using the URL from your administrator.

1. Access Amazon SageMaker Unified Studio using your IAM or single sign-on (SSO) credentials. For more information, see [Access Amazon SageMaker Unified Studio](getting-started-access-the-portal.md).

1. If the project that you want to use isn't already open, do the following:

   1. Choose the current project at the top of the page. If a project isn't already open, choose **Select a project**.

   1. Select **Browse all projects**. 

   1. In **Projects** select the project that you want to use.

1. Choose the **Build** menu option at the top of the page.

1. In **MACHINE LEARNING & GENERATIVE AI** choose **My apps**.

1. In **Apps** choose the app that you want to use.

1. On the **Save** button, choose the menu selector and then select **View history**. 

1. In the **App history** pane, select the version of the app that you want to view. The center pane refreshes to show the selected app version.

1. In the center pane, view the app version and its configuration. You can't make changes to the app. To return to the editable draft, choose **Close App history**.

1. (Optional) In the **App history** pane, choose **Restore** to restore the app version. After restoring the app, Amazon Bedrock in SageMaker Unified Studio closes the app history pane and you can edit the restored app version.

   If you don't save the current draft before restoring a previous version, Amazon Bedrock in SageMaker Unified Studio automatically saves the draft for you.

# Use your app outside of Amazon SageMaker Unified Studio
<a name="app-export"></a>

With Amazon Bedrock in SageMaker Unified Studio, you can export the files for a [chat agent app](create-chat-app.md) or a [flow app](create-flows-app.md). You can then use the app outside of Amazon SageMaker Unified Studio. 

When you export an app, Amazon Bedrock in SageMaker Unified Studio exports a zip file with the AWS CloudFormation templates and other files required by your app. To use your app, you need to deploy the CloudFormation templates to an AWS account. The actual contents of the zip file vary depending on the Amazon Bedrock in SageMaker Unified Studio components that your app uses. After uncompressing the zip file, you deploy its contents into your AWS account (or another AWS account, if you prefer). 

**Important**  
Once you export your app, it's your responsibility to audit the app files and make sure they are correct. You can use the CloudFormation templates as you wish.

An app can include one or more different types of Amazon Bedrock in SageMaker Unified Studio components. For example, a chat agent app could use a guardrail or a knowledge base. When you deploy your app's components, Amazon Bedrock in SageMaker Unified Studio deploys only the AWS infrastructure files. The data source files for a knowledge base and the secrets for a function aren't exported, so you have to configure them during the deployment. After deploying the app to an AWS account, you can run the app as a Node.js app. 

## App export files
<a name="app-export-files"></a>

Depending on the composition of your app, the zip package contains some or all of the following files:
+ **README.md** — Instructions for deploying and running your app.
+ **function-stack-\$1.json** — CloudFormation template that creates your function component, if any. This includes:
  + An AWS Lambda [function](https://docs.aws.amazon.com/bedrock/latest/studio-ug/functions.html) for calling the API defined in your OpenAPI schema.
  + An AWS Secrets Manager secret for storing credentials to use when calling your API. The secret is created with an empty value; you must update it manually. 
+ **knowledge-base-stack-\$1.json** — AWS CloudFormation template that creates your [knowledge base data source](https://docs.aws.amazon.com/bedrock/latest/studio-ug/data-sources.html#data-source-document), if any. This includes an Amazon Bedrock knowledge base configured with your selected data store and vector store. The knowledge base doesn't contain the data that you uploaded to Amazon Bedrock in SageMaker Unified Studio; you must provide the data files manually.
+ **flow-stack.json** — CloudFormation template that creates an Amazon Bedrock flows resource.
+ **guardrails-stack-\$1.json** — CloudFormation template that creates a [guardrail](https://docs.aws.amazon.com/bedrock/latest/studio-ug/guardrails.html) for Amazon Bedrock, if any.
+ **agent-stack.json** — CloudFormation template that creates an Amazon Bedrock Agent, if any.
+ **invocation-policy-\$1.json** — CloudFormation template that creates an IAM policy with the runtime permissions that you need to talk to your deployed chat agent app.
+ **br-studio-app-stack-\$1.json** — Parent stack that orchestrates the deployment of all AWS CloudFormation stacks included in the zip package.
+ **deployApp.sh** — Helper script that you use to deploy your app infrastructure into your AWS account.
+ **code-snippet.mjs** — Example code snippet that you embed in your code to invoke the app.
+ **amazon-bedrock-ide-app.mjs** — Standalone Node.js module to quickly test your deployed app.
+ **aoss-encryption-policy-\$1.json** — Amazon OpenSearch Serverless (AOSS) encryption policy necessary to use a knowledge base. This encryption policy is included automatically when your chat agent app contains an Amazon Bedrock in SageMaker Unified Studio knowledge base.
+ **provisioning-inline-policy.json** — An example of an AWS IAM policy that contains the permissions required to provision the chat agent app resources. The permissions declared in this policy file are needed when deploying the AWS CloudFormation stacks. 

  You can modify this policy to better suit your needs. You may create a new IAM principal with these policies, or attach these policies to an existing IAM principal in your AWS account. 
+ **kms-key-policy.json** — An example of an AWS KMS key policy that contains required permissions for encrypting your chat agent app resources.

  You can modify this key policy to better suit your needs. You may create a new KMS key with this policy, or attach this policy to an existing KMS key in your AWS account.
+ **api-schema\$1.json** — OpenAPI schema files associated with your function components, if any.

**Topics**
+ [App export files](#app-export-files)
+ [Export your Amazon Bedrock app](app-export-chat-app.md)
+ [Deploy an exported Amazon Bedrock app](app-deploy-app.md)
+ [Run a deployed Amazon Bedrock app](app-run-app.md)

# Export your Amazon Bedrock app
<a name="app-export-chat-app"></a>

Use the following procedure to export a chat agent app or a flow app to a zip file. You can then use the app outside of Amazon SageMaker Unified Studio.

**To export a chat agent app or a flow app**

1. Navigate to the Amazon SageMaker Unified Studio landing page by using the URL from your administrator.

1. Access Amazon SageMaker Unified Studio using your IAM or single sign-on (SSO) credentials. For more information, see [Access Amazon SageMaker Unified Studio](getting-started-access-the-portal.md).

1. If the project that you want to use isn't already open, do the following:

   1. Choose the current project at the top of the page. If a project isn't already open, choose **Select a project**.

   1. Select **Browse all projects**. 

   1. In **Projects** select the project that you want to use.

1. Choose the **Build** menu option at the top of the page.

1. In **MACHINE LEARNING & GENERATIVE AI** choose **My apps**.

1. In **Apps** choose the app that you want to export.

1. If you haven't already, choose **Save** to save the app. You can't export an app unless you first save and run the app. 

1. On the app page, choose **Export** to export the app. Amazon Bedrock in SageMaker Unified Studio will create and download a zip file with the name **amazon-bedrock-ide-app-export-\$1.zip**.

1. Next step: [Deploy the app](app-deploy-app.md).

# Deploy an exported Amazon Bedrock app
<a name="app-deploy-app"></a>

The following instructions show you how to deploy a chat agent app that you [export](app-export-chat-app.md) from Amazon Bedrock in SageMaker Unified Studio.

**Topics**
+ [Prerequisites for deploying an exported app](#app-deploy-app-prerequisites)
+ [Deploy the exported app](#app-deploy-app-deploy)

## Prerequisites for deploying an exported app
<a name="app-deploy-app-prerequisites"></a>

Before you can deploy a chat agent app that you have exported, you must first do the following:

**To prepare for app deployment**

1. Install the latest version of the AWS CLI on your local machine by following the instructions at [Install or update to the latest version of the AWS CLI](https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html).

1. Set up AWS credentials for the AWS CLI on your local machine by following the instructions at [Configure the AWS CLI](https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-configure.html). The credentials that the deployment script uses will follow the [order of precedence](https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-configure.html#configure-precedence).

1. (Optional) Using the AWS account that you set up in step 2, create an AWS KMS key for app export by following the instructions at [Creating keys](https://docs.aws.amazon.com/kms/latest/developerguide/create-keys.html). The key must be tagged with the key `EnableBedrock` and a value of `true`. The key must also have a key policy that allows it to be used for encryption of your chat agent app resources. You can use the suggested policy declared in the `kms-key-policy.json` file of your zip package.

1. Create an Amazon S3 bucket to hold the app files that you export by following the instructions at [Creating a bucket](https://docs.aws.amazon.com/AmazonS3/latest/userguide/create-bucket-overview.html). Make sure the bucket is in the same AWS Region as the app that you are deploying. 

1. Create an IAM role that includes the policies from `provisioning-inline-policy.json`. For information about creating a role, see [IAM role creation](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_create.html).

1. If your app includes a knowledge base, copy the data source file to a folder named `data/` in the Amazon S3 bucket that you created in step 4. If your app uses a document as a data source, you supply a list of data source files to the deployment script. For more information, see [Deploy the exported app](#app-deploy-app-deploy).

1. If your app calls a function that requires authorization, update the function environment secret in AWS Secrets Manager with the authorization method that your function uses. Run the following command: 

   ```
   aws secretsmanager update-secret \
     --secret-id br-studio/function-name-export-environment-id \
     --secret-string 'secret-value'
   ```

   To get the `function-name` and `export-environment-id` values, open the `amazon-bedrock-ide-app-stack-nnnn.json` file (where *nnnn* varies per export) from the files that you exported in [Export your Amazon Bedrock app](app-export-chat-app.md). The values are in the `FunctionsStack0` JSON object.

   Replace the following values:
   + `function-name` — with the value of the `functionName` field in the `FunctionsStack0` JSON object.
   + `export-environment-id` — with the value of the `exportAppInstanceId` field in the `FunctionsStack0` JSON object. 
   + `secret-value` — with the value to use for authentication. You specified the authentication type when you [created the function component](creating-a-function-component.md). Use the authentication values that you specified to complete the `secret-value`.

     If the function requires API keys, the syntax of `secret-value` is: `{"key-name-1":"key-value-1","key-name-2":"key-value-2"}` 

     If the function requires Basic authentication, the syntax of `secret-value` is: `{"___AuthType___":"BASIC", "username":"username-value", "password":"password-value"}` 

     If the function requires Bearer token authentication, the syntax of `secret-value` is: `{"___AuthType___":"BEARER", "tokenValue":"token-value"}`

1. Next step: [Deploy the exported app](#app-deploy-app-deploy).

## Deploy the exported app
<a name="app-deploy-app-deploy"></a>

Before deploying your chat agent app, be sure to do the [prerequisite steps](#app-deploy-app-prerequisites).

Deploying a chat agent app deploys the AWS infrastructure files that you need to run the app in AWS. 

**To deploy an exported app**

1. At the command prompt, do the following:

   1. Navigate to the folder containing the files that you extracted from the zip file that you exported from Amazon Bedrock in SageMaker Unified Studio. 

   1. Assume the IAM role that you created in step 5 of [Prerequisites for deploying an exported app](#app-deploy-app-prerequisites). 

   1. Use the following command to make sure the deployment script (`deployApp.sh`) is executable:

      ```
      chmod +x deployApp.sh
      ```

   1. Run the deployment script with the following command:

      ```
      ./deployApp.sh \
          [--awsRegion=value] \
          [--s3BucketName=value] \
          [--assetsS3Path=value] \
          [--kmsKeyArn=value] \
          [--dataFiles=value]
      ```

      Replace the following values:
      + `awsRegion` — with the AWS Region that you want to deploy the app to. Amazon Bedrock must be available in the Region you use. For more information, see [Supported AWS Regions](https://docs.aws.amazon.com/bedrock/latest/userguide/bedrock-regions.html).
      + `s3BucketName` — with the Amazon S3 bucket that you created in step 4 of [Prerequisites for deploying an exported app](#app-deploy-app-prerequisites). The deployment script stores the CloudFormation templates and application data files in this bucket. 
      + `assetsS3Path` — (Optional) with the path in `s3BucketName` where you want the deployment script to store the application files. 
      + `kmsKeyArn` — (Optional) with the ARN of the KMS key that you created in step 3 of [Prerequisites for deploying an exported app](#app-deploy-app-prerequisites).
      + `dataFiles` — with a comma-separated list of data source file paths. Required for apps that use a document data source.

      For example, if you have a chat agent app with a single document as a data source, and you want to deploy the app with encryption, you can use the following command.

      ```
      ./deployApp.sh \
          --awsRegion=us-east-1 \
          --s3BucketName=my-s3-bucket-name-for-exported-chat-apps \
          --assetsS3Path=my-prod-folder/my-chat-app \
          --kmsKeyArn=arn:aws:kms:us-east-1:111122223333:key/11111111-2222-3333-4444-555555555555 \
          --dataFiles=my-data-source.pdf
      ```

1. (Optional) Monitor the deployment in the AWS CloudFormation console.

1. Note the output from the script. You need it to run the chat agent app. It should be similar to: `node amazon-bedrock-ide-app.mjs --question="prompt" --region="AWS Region"`. 

   When you run the app, specify the following parameters:
   + `question` – The prompt that you want to start the app with. 
   + `region` – The AWS Region that you deployed the app to. Use the value of `awsRegion` that you specified in step 1c.

   For example, `node amazon-bedrock-ide-app.mjs --question="Tell me about my documents" --region="us-east-1"`

1. Next step: [Run a deployed Amazon Bedrock app](app-run-app.md).

# Run a deployed Amazon Bedrock app
<a name="app-run-app"></a>

The following instructions show you the steps you take to run a deployed Amazon Bedrock in SageMaker Unified Studio chat agent app.

**Topics**
+ [Prerequisites for running a chat agent app](#app-run-app-prerequisites)
+ [Run the app](#app-deploy-app-run)

## Prerequisites for running a chat agent app
<a name="app-run-app-prerequisites"></a>

Before you can run an app that you have exported, you must first do the following:

**To prepare for running an app**

1. Download and install Node.js. For more information, see [Download Node.js](https://nodejs.org/en/download/package-manager).

1. At the command prompt, install third-party Node.js libraries by running the following commands:

   ```
   npm install minimist
   npm install aws-sdk
   npm install @aws-sdk/credential-providers
   npm install @aws-sdk/client-bedrock-agent-runtime
   npm install @aws-sdk/client-bedrock-runtime
   ```

   For a flow app, you also need the following command:

   ```
   npm install @aws-sdk/client-bedrock-agent
   ```

1. Create or update an IAM role to run the app with. For the policy, use the policy that `deployApp.sh` created when you deployed the app. The policy name is `BRStudioExportedAppInvocationRolePolicy-exportProjectId`. The policy is declared in `invocation-policy-\$1.json`. For more information, see [Creating roles](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_create.html).

## Run the app
<a name="app-deploy-app-run"></a>

To run your app, you need an IAM role with permissions to invoke Amazon Bedrock resources. When you deploy the app, the CloudFormation stack that the `deployApp.sh` script deploys provisions a suitable policy in your AWS account (declared in `invocation-policy-*.json`).

**To run the app**

1. Switch to the IAM role that you created in step 3 of [Prerequisites for running a chat agent app](#app-run-app-prerequisites).

1. Run the app by entering the command you noted in step 3 of [Deploy the exported app](app-deploy-app.md#app-deploy-app-deploy).