


# Amazon Titan models
<a name="model-parameters-titan"></a>

This section describes the request parameters and response fields for Amazon Titan models. Use this information to make inference calls to Amazon Titan models with the [InvokeModel](https://docs.aws.amazon.com/bedrock/latest/APIReference/API_runtime_InvokeModel.html) and [InvokeModelWithResponseStream](https://docs.aws.amazon.com/bedrock/latest/APIReference/API_runtime_InvokeModelWithResponseStream.html) (streaming) operations. This section also includes Python code examples that show how to call Amazon Titan models. To use a model in an inference operation, you need the model ID for the model. To get the model ID, see [Supported foundation models in Amazon Bedrock](models-supported.md). Some models also work with the [Converse API](conversation-inference.md). To check whether the Converse API supports a specific Amazon Titan model, see [Supported models and model features](conversation-inference-supported-models-features.md). For more code examples, see [Code examples for Amazon Bedrock using AWS SDKs](service_code_examples.md).

Foundation models in Amazon Bedrock support input and output modalities that vary from model to model. To check the modalities that the Titan models support, the Amazon Bedrock features they work with, and the AWS Regions in which they are available, see [Supported foundation models in Amazon Bedrock](models-supported.md).

When you make an inference call with an Amazon Titan model, you include a prompt for the model. For general information about creating prompts for the models that Amazon Bedrock supports, see [Prompt engineering concepts](prompt-engineering-guidelines.md).

**Topics**
+ [Amazon Titan Text models](model-parameters-titan-text.md)
+ [Amazon Titan Image Generator G1 models](model-parameters-titan-image.md)
+ [Amazon Titan Embeddings G1 - Text](model-parameters-titan-embed-text.md)
+ [Amazon Titan Multimodal Embeddings G1](model-parameters-titan-embed-mm.md)

# Amazon Titan Text models
<a name="model-parameters-titan-text"></a>

The Amazon Titan Text models support the following inference parameters.

For more information about Titan Text prompt engineering guidelines, see [Titan Text Prompt Engineering Guidelines](https://d2eo22ngex1n9g.cloudfront.net/Documentation/User+Guides/Titan/Amazon+Titan+Text+Prompt+Engineering+Guidelines.pdf).

For more information about Titan models, see [Overview of Amazon Titan models](titan-models.md).

**Topics**
+ [Request and response](#model-parameters-titan-request-response)
+ [Code examples](#inference-titan-code)

## Request and response
<a name="model-parameters-titan-request-response"></a>

You pass the request body in the `body` field of an [InvokeModel](https://docs.aws.amazon.com/bedrock/latest/APIReference/API_runtime_InvokeModel.html) or [InvokeModelWithResponseStream](https://docs.aws.amazon.com/bedrock/latest/APIReference/API_runtime_InvokeModelWithResponseStream.html) request.

------
#### [ Request ]

```
{
    "inputText": string,
    "textGenerationConfig": {
        "temperature": float,  
        "topP": float,
        "maxTokenCount": int,
        "stopSequences": [string]
    }
}
```

The following parameter is required:
+ **inputText** – The prompt to provide the model for generating a response. To generate responses in a conversational style, wrap the prompt in the following format:

  ```
  "inputText": "User: <theUserPrompt>\nBot:"
  ```

  This format tells the model to respond on a new line after the user's prompt.
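
As a minimal Python sketch of this convention (the helper name `to_conversational_prompt` is illustrative, not part of the Bedrock API):

```python
# Illustrative helper (not part of the Bedrock API): wrap a user prompt
# in the conversational format that Titan Text expects.
def to_conversational_prompt(user_prompt: str) -> str:
    return f"User: {user_prompt}\nBot:"

# The resulting string goes in the inputText field of the request body.
request_fragment = {"inputText": to_conversational_prompt("Hello, who are you?")}
```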

`textGenerationConfig` is optional. You can use it to configure the following [inference parameters](inference-parameters.md):
+ **temperature** – Use a lower value to decrease randomness in the response.  
[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/zh_cn/bedrock/latest/userguide/model-parameters-titan-text.html)
+ **topP** – Use a lower value to ignore less probable options and decrease the diversity of responses.  
[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/zh_cn/bedrock/latest/userguide/model-parameters-titan-text.html)
+ **maxTokenCount** – Specify the maximum number of tokens to generate in the response. Maximum token limits are strictly enforced.  
[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/zh_cn/bedrock/latest/userguide/model-parameters-titan-text.html)
+ **stopSequences** – Specify character sequences that indicate where the model should stop.
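
Putting the required and optional fields together, a full request body might be serialized as follows; all parameter values here are illustrative choices, not recommended defaults:

```python
import json

# Illustrative InvokeModel request body for a Titan Text model.
# Every value below is an example, not a recommendation.
body = json.dumps({
    "inputText": "User: Write a haiku about the ocean.\nBot:",
    "textGenerationConfig": {
        "temperature": 0.5,
        "topP": 0.9,
        "maxTokenCount": 512,
        "stopSequences": ["User:"],
    },
})
```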

------
#### [ InvokeModel Response ]

```
{
    "inputTextTokenCount": int,
    "results": [{
        "tokenCount": int,
        "outputText": "\n<response>\n",
        "completionReason": "string"
    }]
}
```

The response body contains the following fields:
+ **inputTextTokenCount** – The number of tokens in the prompt.
+ **results** – An array of one item, an object that contains the following fields:
  + **tokenCount** – The number of tokens in the response.
  + **outputText** – The text in the response.
  + **completionReason** – The reason the response finished being generated. The following reasons are possible:
    + FINISHED – The response was fully generated.
    + LENGTH – The response was truncated because of the response length you set.
    + STOP\_CRITERIA\_MET – The response was truncated because a stop criterion was reached.
    + RAG\_QUERY\_WHEN\_RAG\_DISABLED – The functionality is disabled and cannot complete the query.
    + CONTENT\_FILTERED – The content was filtered or removed by the content filter that was applied.

------
#### [ InvokeModelWithResponseStream Response ]

Each chunk of text in the body of the response stream is in the following format. You must decode the `bytes` field (for an example, see [Submit a single prompt with InvokeModel](inference-invoke.md)).

```
{
    "chunk": {
        "bytes": b'{
            "index": int,
            "inputTextTokenCount": int,
            "totalOutputTextTokenCount": int,
            "outputText": "<response-chunk>",
            "completionReason": "string"
        }'
    }
}
```
+ **index** – The index of the chunk in the streaming response.
+ **inputTextTokenCount** – The number of tokens in the prompt.
+ **totalOutputTextTokenCount** – The number of tokens in the response.
+ **outputText** – The text in the response.
+ **completionReason** – The reason the response finished being generated. The following reasons are possible:
  + FINISHED – The response was fully generated.
  + LENGTH – The response was truncated because of the response length you set.
  + STOP\_CRITERIA\_MET – The response was truncated because a stop criterion was reached.
  + RAG\_QUERY\_WHEN\_RAG\_DISABLED – The functionality is disabled and cannot complete the query.
  + CONTENT\_FILTERED – The content was filtered or removed by the content filter that was applied.
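
As a sketch, decoding one chunk might look like the following; the `event` dict here is a hand-built stand-in for a real streaming event (a real event comes from the `InvokeModelWithResponseStream` response iterator):

```python
import json

# Hand-built stand-in for one streaming event, shaped like the format above.
event = {
    "chunk": {
        "bytes": b'{"index": 0, "inputTextTokenCount": 5,'
                 b' "totalOutputTextTokenCount": 3,'
                 b' "outputText": "Hello.", "completionReason": "FINISHED"}'
    }
}

# Decode the bytes field into a dict, as described above.
chunk = json.loads(event["chunk"]["bytes"].decode("utf-8"))
print(chunk["outputText"], chunk["completionReason"])
```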

------

## Code examples
<a name="inference-titan-code"></a>

The following example shows how to run inference with the Amazon Titan Text Premier model using the Python SDK.

```
# Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved.
# SPDX-License-Identifier: Apache-2.0
"""
Shows how to create a list of action items from a meeting transcript
with the Amazon Titan Text model (on demand).
"""
import json
import logging
import boto3

from botocore.exceptions import ClientError


class ImageError(Exception):
    "Custom exception for errors returned by Amazon Titan Text models"

    def __init__(self, message):
        self.message = message


logger = logging.getLogger(__name__)
logging.basicConfig(level=logging.INFO)


def generate_text(model_id, body):
    """
    Generate text using Amazon Titan Text models on demand.
    Args:
        model_id (str): The model ID to use.
        body (str) : The request body to use.
    Returns:
        response (json): The response from the model.
    """

    logger.info(
        "Generating text with Amazon Titan Text model %s", model_id)

    bedrock = boto3.client(service_name='bedrock-runtime')

    accept = "application/json"
    content_type = "application/json"

    response = bedrock.invoke_model(
        body=body, modelId=model_id, accept=accept, contentType=content_type
    )
    response_body = json.loads(response.get("body").read())

    finish_reason = response_body.get("error")

    if finish_reason is not None:
        raise ImageError(f"Text generation error. Error is {finish_reason}")

    logger.info(
        "Successfully generated text with Amazon Titan Text model %s", model_id)

    return response_body


def main():
    """
    Entrypoint for Amazon Titan Text model example.
    """
    try:
        logging.basicConfig(level=logging.INFO,
                            format="%(levelname)s: %(message)s")

        # You can replace the model_id with any other Titan Text Models
        # Titan Text Model family model_id is as mentioned below:
        # amazon.titan-text-premier-v1:0, amazon.titan-text-express-v1, amazon.titan-text-lite-v1
        model_id = 'amazon.titan-text-premier-v1:0'

        prompt = """Meeting transcript: Miguel: Hi Brant, I want to discuss the workstream  
            for our new product launch Brant: Sure Miguel, is there anything in particular you want
            to discuss? Miguel: Yes, I want to talk about how users enter into the product.
            Brant: Ok, in that case let me add in Namita. Namita: Hey everyone 
            Brant: Hi Namita, Miguel wants to discuss how users enter into the product.
            Miguel: its too complicated and we should remove friction.  
            for example, why do I need to fill out additional forms?  
            I also find it difficult to find where to access the product
            when I first land on the landing page. Brant: I would also add that
            I think there are too many steps. Namita: Ok, I can work on the
            landing page to make the product more discoverable but brant
            can you work on the additonal forms? Brant: Yes but I would need 
            to work with James from another team as he needs to unblock the sign up workflow.
            Miguel can you document any other concerns so that I can discuss with James only once?
            Miguel: Sure.
            From the meeting transcript above, Create a list of action items for each person. """

        body = json.dumps({
            "inputText": prompt,
            "textGenerationConfig": {
                "maxTokenCount": 3072,
                "stopSequences": [],
                "temperature": 0.7,
                "topP": 0.9
            }
        })

        response_body = generate_text(model_id, body)
        print(f"Input token count: {response_body['inputTextTokenCount']}")

        for result in response_body['results']:
            print(f"Token count: {result['tokenCount']}")
            print(f"Output text: {result['outputText']}")
            print(f"Completion reason: {result['completionReason']}")

    except ClientError as err:
        message = err.response["Error"]["Message"]
        logger.error("A client error occurred: %s", message)
        print(f"A client error occurred: {message}")
    except ImageError as err:
        logger.error(err.message)
        print(err.message)

    else:
        print(
            f"Finished generating text with the Amazon Titan Text Premier model {model_id}.")


if __name__ == "__main__":
    main()
```

The following example shows how to run inference with the Amazon Titan Text G1 - Express model using the Python SDK.

```
# Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved.
# SPDX-License-Identifier: Apache-2.0
"""
Shows how to create a list of action items from a meeting transcript
with the Amazon Titan Text G1 - Express model (on demand).
"""
import json
import logging
import boto3

from botocore.exceptions import ClientError


class ImageError(Exception):
    "Custom exception for errors returned by the Amazon Titan Text G1 - Express model"

    def __init__(self, message):
        self.message = message


logger = logging.getLogger(__name__)
logging.basicConfig(level=logging.INFO)


def generate_text(model_id, body):
    """
    Generate text using the Amazon Titan Text G1 - Express model on demand.
    Args:
        model_id (str): The model ID to use.
        body (str) : The request body to use.
    Returns:
        response (json): The response from the model.
    """

    logger.info(
        "Generating text with the Amazon Titan Text G1 - Express model %s", model_id)

    bedrock = boto3.client(service_name='bedrock-runtime')

    accept = "application/json"
    content_type = "application/json"

    response = bedrock.invoke_model(
        body=body, modelId=model_id, accept=accept, contentType=content_type
    )
    response_body = json.loads(response.get("body").read())

    finish_reason = response_body.get("error")

    if finish_reason is not None:
        raise ImageError(f"Text generation error. Error is {finish_reason}")

    logger.info(
        "Successfully generated text with the Amazon Titan Text G1 - Express model %s", model_id)

    return response_body


def main():
    """
    Entrypoint for the Amazon Titan Text G1 - Express example.
    """
    try:
        logging.basicConfig(level=logging.INFO,
                            format="%(levelname)s: %(message)s")

        model_id = 'amazon.titan-text-express-v1'

        prompt = """Meeting transcript: Miguel: Hi Brant, I want to discuss the workstream  
            for our new product launch Brant: Sure Miguel, is there anything in particular you want
            to discuss? Miguel: Yes, I want to talk about how users enter into the product.
            Brant: Ok, in that case let me add in Namita. Namita: Hey everyone 
            Brant: Hi Namita, Miguel wants to discuss how users enter into the product.
            Miguel: its too complicated and we should remove friction.  
            for example, why do I need to fill out additional forms?  
            I also find it difficult to find where to access the product
            when I first land on the landing page. Brant: I would also add that
            I think there are too many steps. Namita: Ok, I can work on the
            landing page to make the product more discoverable but brant
            can you work on the additonal forms? Brant: Yes but I would need 
            to work with James from another team as he needs to unblock the sign up workflow.
            Miguel can you document any other concerns so that I can discuss with James only once?
            Miguel: Sure.
            From the meeting transcript above, Create a list of action items for each person. """

        body = json.dumps({
            "inputText": prompt,
            "textGenerationConfig": {
                "maxTokenCount": 4096,
                "stopSequences": [],
                "temperature": 0,
                "topP": 1
            }
        })

        response_body = generate_text(model_id, body)
        print(f"Input token count: {response_body['inputTextTokenCount']}")

        for result in response_body['results']:
            print(f"Token count: {result['tokenCount']}")
            print(f"Output text: {result['outputText']}")
            print(f"Completion reason: {result['completionReason']}")

    except ClientError as err:
        message = err.response["Error"]["Message"]
        logger.error("A client error occurred: %s", message)
        print(f"A client error occurred: {message}")
    except ImageError as err:
        logger.error(err.message)
        print(err.message)

    else:
        print(
            f"Finished generating text with the Amazon Titan Text G1 - Express model {model_id}.")


if __name__ == "__main__":
    main()
```

# Amazon Titan Image Generator G1 models
<a name="model-parameters-titan-image"></a>

The Amazon Titan Image Generator G1 V1 and Titan Image Generator G1 V2 models support the following inference parameters and model responses when you perform model inference.

**Topics**
+ [Inference parameters](#model-parameters-titan-image-api)
+ [Examples](#model-parameters-titan-image-code-examples)

## Inference parameters
<a name="model-parameters-titan-image-api"></a>

When you call [InvokeModel](https://docs.aws.amazon.com/bedrock/latest/APIReference/API_runtime_InvokeModel.html) with an Amazon Titan Image Generator model, replace the `body` field of the request with the format that matches your use case. All tasks share an `imageGenerationConfig` object, but each task also has a parameter object specific to that task. The following use cases are supported:


| taskType | Task parameter field | Task type | Definition | 
| --- | --- | --- | --- | 
| TEXT\_IMAGE | textToImageParams | Generation |  Generate an image using a text prompt.  | 
| TEXT\_IMAGE | textToImageParams | Generation |  (V2 image conditioning only) Provide an additional conditioning image along with the text prompt to generate an image that follows the layout and composition of the conditioning image.  | 
| INPAINTING | inPaintingParams | Editing |  Modify an image by changing the area inside a *mask* to match the surrounding background.  | 
| OUTPAINTING | outPaintingParams | Editing | Modify an image by seamlessly extending the area defined by a mask. | 
| IMAGE\_VARIATION | imageVariationParams | Editing | Modify an image by producing variations of the original image. | 
| COLOR\_GUIDED\_GENERATION (V2 only) | colorGuidedGenerationParams | Generation | Provide a list of hex color codes along with a text prompt to generate an image that follows the color palette. | 
| BACKGROUND\_REMOVAL (V2 only) | backgroundRemovalParams | Editing | Modify an image by identifying the objects in it and removing the background, producing an image with a transparent background. | 

Editing tasks require an `image` field in the input. This field consists of a string that defines the pixels in the image. Each pixel is defined by three RGB channels, each ranging from 0 to 255 (for example, (255 255 0) denotes yellow). These channels are encoded in base64.

The image you use must be in JPEG or PNG format.
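
A minimal sketch of producing such a base64-encoded string from a local JPEG or PNG file (the path argument is a placeholder for your own file):

```python
import base64

def encode_image(path: str) -> str:
    """Read a JPEG or PNG file and return it as a base64-encoded string,
    suitable for the image field of an editing request."""
    with open(path, "rb") as f:
        return base64.b64encode(f.read()).decode("utf-8")
```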

If you are inpainting or outpainting, you must also define a *mask*, one or more regions that define the parts of the image to modify. You can define the mask in one of two ways:
+ `maskPrompt` – Write a text prompt that describes the parts of the image to mask.
+ `maskImage` – Enter a base64-encoded string that defines the masked regions by marking each pixel in the input image as either (0 0 0) or (255 255 255).
  + Pixels defined as (0 0 0) are inside the mask.
  + Pixels defined as (255 255 255) are outside the mask.

  You can use a photo-editing tool to draw a mask, convert the resulting JPEG or PNG image to base64 encoding, and enter it in this field. Otherwise, use the `maskPrompt` field instead to let the model infer the mask.

Choose a tab to see the API request body and field descriptions for the different image generation use cases.

------
#### [ Text-to-image generation (Request) ]

The text prompt used to generate the image must be no longer than 512 characters. The resolution of the longest side must be no greater than 1,408 pixels. negativeText (optional) – a text prompt, no longer than 512 characters, that defines what not to include in the image. See the table below for a complete list of resolutions.

```
{
    "taskType": "TEXT_IMAGE",
    "textToImageParams": {
        "text": "string",      
        "negativeText": "string"
    },
    "imageGenerationConfig": {
        "quality": "standard" | "premium",
        "numberOfImages": int,
        "height": int,
        "width": int,
        "cfgScale": float,
        "seed": int
    }
}
```

The `textToImageParams` fields are described below.
+ **text** (Required) – A text prompt to use to generate the image.
+ **negativeText** (Optional) – A text prompt that defines what not to include in the image.
**Note**  
Don't use negating words in the `negativeText` prompt. For example, if you don't want mirrors in an image, enter **mirrors** in the `negativeText` prompt rather than **no mirrors**.

------
#### [ Inpainting (Request) ]

text (optional) – A text prompt that defines what to change inside the mask. If you don't include this field, the model tries to replace the entire mask area with the background. Must be no longer than 512 characters. negativeText (optional) – A text prompt that defines what not to include in the image. Must be no longer than 512 characters. The input image and input mask are limited to 1,408 pixels or fewer on the longest side of the image. The output is the same size as the input.

```
{
    "taskType": "INPAINTING",
    "inPaintingParams": {
        "image": "base64-encoded string",                         
        "text": "string",
        "negativeText": "string",        
        "maskPrompt": "string",                      
        "maskImage": "base64-encoded string",   
        "returnMask": boolean # False by default                
    },                                                 
    "imageGenerationConfig": {
        "quality": "standard" | "premium",
        "numberOfImages": int,
        "height": int,
        "width": int,
        "cfgScale": float
    }
}
```

The `inPaintingParams` fields are described below. The *mask* defines the part of the image that you want to modify.
+ **image** (Required) – The JPEG or PNG image to modify, formatted as a string that specifies a sequence of pixels, each defined by RGB values and encoded in base64. For examples of how to encode an image as base64 and how to decode a base64-encoded string and convert it to an image, see the [code examples](#model-parameters-titan-image-code-examples).
+ You must define one, but not both, of the following fields:
  + **maskPrompt** – A text prompt that defines the mask.
  + **maskImage** – A string that defines the mask by specifying a sequence of pixels of the same size as `image`. Each pixel's RGB value becomes either (0 0 0) (a pixel inside the mask) or (255 255 255) (a pixel outside the mask). For examples of how to encode an image as base64 and how to decode a base64-encoded string and convert it to an image, see the [code examples](#model-parameters-titan-image-code-examples).
+ **text** (Optional) – A text prompt that defines what to change inside the mask. If you don't include this field, the model tries to replace the entire mask area with the background.
+ **negativeText** (Optional) – A text prompt that defines what not to include in the image.
**Note**  
Don't use negating words in the `negativeText` prompt. For example, if you don't want mirrors in an image, enter **mirrors** in the `negativeText` prompt rather than **no mirrors**.

------
#### [ Outpainting (Request) ]

text (required) – A text prompt that defines what to change outside the mask. Must be no longer than 512 characters. negativeText (optional) – A text prompt that defines what not to include in the image. Must be no longer than 512 characters. The input image and input mask are limited to 1,408 pixels or fewer on the longest side of the image. The output is the same size as the input.

```
{
    "taskType": "OUTPAINTING",
    "outPaintingParams": {
        "text": "string",
        "negativeText": "string",        
        "image": "base64-encoded string",                         
        "maskPrompt": "string",                      
        "maskImage": "base64-encoded string",    
        "returnMask": boolean, # False by default                                         
        "outPaintingMode": "DEFAULT | PRECISE"                 
    },                                                 
    "imageGenerationConfig": {
        "quality": "standard" | "premium",
        "numberOfImages": int,
        "height": int,
        "width": int,
        "cfgScale": float
    }
}
```

The `outPaintingParams` fields are described below. The *mask* defines the region of the image that you don't want to modify. The generation seamlessly extends the region that you define.
+ **image** (Required) – The JPEG or PNG image to modify, formatted as a string that specifies a sequence of pixels, each defined by RGB values and encoded in base64. For examples of how to encode an image as base64 and how to decode a base64-encoded string and convert it to an image, see the [code examples](#model-parameters-titan-image-code-examples).
+ You must define one, but not both, of the following fields:
  + **maskPrompt** – A text prompt that defines the mask.
  + **maskImage** – A string that defines the mask by specifying a sequence of pixels of the same size as `image`. Each pixel's RGB value becomes either (0 0 0) (a pixel inside the mask) or (255 255 255) (a pixel outside the mask). For examples of how to encode an image as base64 and how to decode a base64-encoded string and convert it to an image, see the [code examples](#model-parameters-titan-image-code-examples).
+ **text** (Required) – A text prompt that defines what to change outside the mask.
+ **negativeText** (Optional) – A text prompt that defines what not to include in the image.
**Note**  
Don't use negating words in the `negativeText` prompt. For example, if you don't want mirrors in an image, enter **mirrors** in the `negativeText` prompt rather than **no mirrors**.
+ **outPaintingMode** – Specifies whether pixels inside the mask may be modified. The following values are possible:
  + DEFAULT – Use this option to allow the image inside the mask to be modified so that it stays consistent with the reconstructed background.
  + PRECISE – Use this option to prevent the image inside the mask from being modified.

------
#### [ Image variation (Request) ]

Image variation lets you create variations of an original image based on the parameter values. The input image is limited to 1,408 pixels or fewer on the longest side. See the table below for a complete list of resolutions.
+ text (optional) – A text prompt that defines what to preserve and what to change in the image. Must be no longer than 512 characters.
+ negativeText (optional) – A text prompt that defines what not to include in the image. Must be no longer than 512 characters.
+ similarityStrength (optional) – Specifies how similar the generated image should be to the input image. Use a lower value to introduce more randomness into the generation. The accepted range is 0.2 to 1.0 (inclusive); if this parameter is omitted from the request, the default of 0.7 is used.

```
{
     "taskType": "IMAGE_VARIATION",
     "imageVariationParams": {
         "text": "string",
         "negativeText": "string",
         "images": ["base64-encoded string"],
         "similarityStrength": 0.7  # Range: 0.2 to 1.0
     },
     "imageGenerationConfig": {
         "quality": "standard" | "premium",
         "numberOfImages": int,
         "height": int,
         "width": int,
         "cfgScale": float
     }
}
```

The `imageVariationParams` fields are described below.
+ **images** (Required) – A list of images to generate variations for. You can include 1 to 5 images. An image is defined as a base64-encoded image string. For examples of how to encode an image as base64 and how to decode a base64-encoded string and convert it to an image, see the [code examples](#model-parameters-titan-image-code-examples).
+ **text** (Optional) – A text prompt that defines what to preserve and what to change in the image.
+ **similarityStrength** (Optional) – Specifies how similar the generated image should be to the input image. The range is 0.2 to 1.0; use a lower value to introduce more randomness.
+ **negativeText** (Optional) – A text prompt that defines what not to include in the image.
**Note**  
Don't use negating words in the `negativeText` prompt. For example, if you don't want mirrors in an image, enter **mirrors** in the `negativeText` prompt rather than **no mirrors**.

------
#### [ Conditioned Image Generation (Request) V2 only ]

The conditioned image generation task type gives you finer-grained control over generated images by letting you provide a *condition image* alongside the text prompt. Two conditioning modes are supported:
+ Canny edge detection
+ Segmentation maps

The text prompt used to generate the image must be no longer than 512 characters. The resolution of the longest side must be no greater than 1,408 pixels. negativeText (optional) is a text prompt, no longer than 512 characters, that defines what not to include in the image. See the table below for a complete list of resolutions.

```
{
    "taskType": "TEXT_IMAGE",
    "textToImageParams": {
        "text": "string",      
        "negativeText": "string",
        "conditionImage": "base64-encoded string", # [OPTIONAL] base64 encoded image
        "controlMode": "string", # [OPTIONAL] CANNY_EDGE | SEGMENTATION. DEFAULT: CANNY_EDGE
        "controlStrength": float # [OPTIONAL] weight given to the condition image. DEFAULT: 0.7
    },
    "imageGenerationConfig": {
        "quality": "standard" | "premium",
        "numberOfImages": int,
        "height": int,
        "width": int,
        "cfgScale": float,
        "seed": int
    }
}
```
+ **text** (Required) – A text prompt to use to generate the image.
+ **negativeText** (Optional) – A text prompt that defines what not to include in the image.
**Note**  
Don't use negating words in the `negativeText` prompt. For example, if you don't want mirrors in an image, enter **mirrors** in the `negativeText` prompt rather than **no mirrors**.
+ **conditionImage** (Optional, V2 only) – A single input conditioning image that guides the layout and composition of the generated image. The image is defined as a base64-encoded image string. For examples of how to encode an image as base64 and how to decode a base64-encoded string and convert it to an image, see the [code examples](#model-parameters-titan-image-code-examples).
+ **controlMode** (Optional, V2 only) – Specifies which conditioning mode to use. Two conditioning modes are supported: CANNY\_EDGE and SEGMENTATION. The default is CANNY\_EDGE.
+ **controlStrength** (Optional, V2 only) – Specifies how closely the layout and composition of the generated image should follow the conditionImage. The range is 0 to 1.0; use a lower value to introduce more randomness. The default is 0.7.

**Note**  
If you provide controlMode or controlStrength, you must also provide conditionImage.

------
#### [ Color Guided Content (Request) V2 only ]

Provide a list of hex color codes along with a text prompt to generate an image that follows the color palette. The text prompt used to generate the image must be no longer than 512 characters. The maximum resolution of the longest side is 1,408 pixels. A list of 1 to 10 hex color codes is required to specify the colors in the generated image. negativeText (optional) is a text prompt, no longer than 512 characters, that defines what not to include in the image. referenceImage (optional) is an additional reference image that guides the color palette of the generated image. The uploaded RGB reference image is limited to 1,408 pixels or fewer on the longest side.

```
{
    "taskType": "COLOR_GUIDED_GENERATION",
    "colorGuidedGenerationParams": {
        "text": "string",      
        "negativeText": "string",
        "referenceImage": "base64-encoded string", # [OPTIONAL]
        "colors": ["string"] # list of color hex codes
    },
    "imageGenerationConfig": {
        "quality": "standard" | "premium",
        "numberOfImages": int,
        "height": int,
        "width": int,
        "cfgScale": float,
        "seed": int
    }
}
```

The colorGuidedGenerationParams fields are described below. Note that this task type applies to V2 only.
+ **text** (Required) – A text prompt to use to generate the image.
+ **colors** (Required) – A list of up to 10 hex color codes that specify the colors in the generated image.
+ **negativeText** (Optional) – A text prompt that defines what not to include in the image.
**Note**  
Don't use negating words in the `negativeText` prompt. For example, if you don't want mirrors in an image, enter **mirrors** in the `negativeText` prompt rather than **no mirrors**.
+ **referenceImage** (Optional) – A single input reference image that guides the color palette of the generated image. The image is defined as a base64-encoded image string.
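
Because the colors list is constrained to 1 to 10 hex codes, a small client-side validation sketch can catch malformed palettes before the request is sent; the helper name `validate_colors` is illustrative, not part of any SDK:

```python
import re

# Hex codes like "#FF9800": a "#" followed by six hex digits.
HEX_COLOR = re.compile(r"^#[0-9A-Fa-f]{6}$")

def validate_colors(colors):
    """Illustrative check for the colors field: 1-10 hex codes."""
    if not 1 <= len(colors) <= 10:
        raise ValueError("colors must contain between 1 and 10 hex codes")
    bad = [c for c in colors if not HEX_COLOR.match(c)]
    if bad:
        raise ValueError(f"invalid hex color codes: {bad}")
    return colors
```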

------
#### [ Background Removal (Request) ]

The background removal task type automatically identifies the objects in the input image and removes the background. The output image has a transparent background.

**Request format**

```
{
    "taskType": "BACKGROUND_REMOVAL",
    "backgroundRemovalParams": {
        "image": "base64-encoded string"
    }
}
```

**Response format**

```
{
  "images": [
    "base64-encoded string", 
    ...
  ],
  "error": "string" 
}
```

The backgroundRemovalParams field is described below.
+ **image** (Required) – The JPEG or PNG image to modify, formatted as a string that specifies a sequence of pixels, each defined by RGB values and encoded in base64.

------
#### [ Response body ]

```
{
  "images": [
    "base64-encoded string", 
    ...
  ],
  "error": "string" 
}
```

The response body is a streaming object that contains one of the following fields:
+ `images` – Returned if the request succeeds: a list of base64-encoded strings, each of which defines a generated image. Each image is formatted as a string that specifies a sequence of pixels, each defined by RGB values and encoded in base64. For examples of how to encode an image as base64 and how to decode a base64-encoded string and convert it to an image, see the [code examples](#model-parameters-titan-image-code-examples).
+ `error` – Returned with a message if the request violates the content moderation policy in either of the following ways:
  + The content moderation policy flags the input text, image, or mask image.
  + The content moderation policy flags at least one of the output images.
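
A sketch of handling such a response body in Python; `response_body` below is a hand-built stand-in with a fake payload, not a real model response:

```python
import base64

# Hand-built stand-in for a parsed response body (not real model output).
fake_png = base64.b64encode(b"\x89PNG\r\n\x1a\n...").decode("utf-8")
response_body = {"images": [fake_png]}

# Surface a moderation error if present; otherwise decode every image.
if response_body.get("error"):
    raise RuntimeError(f"Image generation error: {response_body['error']}")

decoded_images = [base64.b64decode(s) for s in response_body["images"]]
```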

------

The shared, optional `imageGenerationConfig` object contains the following fields. If you don't include this object, the default configuration is used.
+ **quality** – The quality of the image. The default is `standard`. For pricing details, see [Amazon Bedrock pricing](https://aws.amazon.com/bedrock/pricing/).
+ **numberOfImages** (Optional) – The number of images to generate.  
[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/zh_cn/bedrock/latest/userguide/model-parameters-titan-image.html)
+ **cfgScale** (Optional) – Specifies how strongly the generated image should adhere to the prompt. Use a lower value to introduce more randomness into the generation.  
[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/zh_cn/bedrock/latest/userguide/model-parameters-titan-image.html)
+ The following parameters define the size of the output image. For details on pricing by image size, see [Amazon Bedrock pricing](https://aws.amazon.com/bedrock/pricing/).
  + **height** (Optional) – The height of the image in pixels. The default is 1408.
  + **width** (Optional) – The width of the image in pixels. The default is 1408.

  The following sizes are allowed.  
[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/zh_cn/bedrock/latest/userguide/model-parameters-titan-image.html)
+ **seed** (Optional) – Use to control and reproduce results. Determines the initial noise setting. Use the same seed and the same settings as a previous run to allow inference to create a similar image.  
[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/zh_cn/bedrock/latest/userguide/model-parameters-titan-image.html)
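
Assembled as Python, a shared configuration might look like this; every value here is an example choice within the documented ranges, not a recommended setting:

```python
# Illustrative shared imageGenerationConfig; values are examples only.
image_generation_config = {
    "quality": "standard",   # or "premium"
    "numberOfImages": 1,
    "height": 1024,
    "width": 1024,
    "cfgScale": 8.0,
    "seed": 42,              # reuse the same seed to reproduce similar images
}
```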

## Examples
<a name="model-parameters-titan-image-code-examples"></a>

The following examples show how to invoke the Amazon Titan Image Generator model with on-demand throughput using the Python SDK. Choose a tab to see an example for each use case. Each example displays the image at the end.

------
#### [ Text-to-image generation ]

```
# Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved.
# SPDX-License-Identifier: Apache-2.0
"""
Shows how to generate an image from a text prompt with the Amazon Titan Image Generator G1 model (on demand).
"""
import base64
import io
import json
import logging
import boto3
from PIL import Image

from botocore.exceptions import ClientError


class ImageError(Exception):
    "Custom exception for errors returned by Amazon Titan Image Generator G1"

    def __init__(self, message):
        self.message = message


logger = logging.getLogger(__name__)
logging.basicConfig(level=logging.INFO)


def generate_image(model_id, body):
    """
    Generate an image using Amazon Titan Image Generator G1 model on demand.
    Args:
        model_id (str): The model ID to use.
        body (str) : The request body to use.
    Returns:
        image_bytes (bytes): The image generated by the model.
    """

    logger.info(
        "Generating image with Amazon Titan Image Generator G1 model %s", model_id)

    bedrock = boto3.client(service_name='bedrock-runtime')

    accept = "application/json"
    content_type = "application/json"

    response = bedrock.invoke_model(
        body=body, modelId=model_id, accept=accept, contentType=content_type
    )
    response_body = json.loads(response.get("body").read())

    base64_image = response_body.get("images")[0]
    base64_bytes = base64_image.encode('ascii')
    image_bytes = base64.b64decode(base64_bytes)

    finish_reason = response_body.get("error")

    if finish_reason is not None:
        raise ImageError(f"Image generation error. Error is {finish_reason}")

    logger.info(
        "Successfully generated image with Amazon Titan Image Generator G1 model %s", model_id)

    return image_bytes


def main():
    """
    Entrypoint for Amazon Titan Image Generator G1 example.
    """

    logging.basicConfig(level=logging.INFO,
                        format="%(levelname)s: %(message)s")

    model_id = 'amazon.titan-image-generator-v1'

    prompt = """A photograph of a cup of coffee from the side."""

    body = json.dumps({
        "taskType": "TEXT_IMAGE",
        "textToImageParams": {
            "text": prompt
        },
        "imageGenerationConfig": {
            "numberOfImages": 1,
            "height": 1024,
            "width": 1024,
            "cfgScale": 8.0,
            "seed": 0
        }
    })

    try:
        image_bytes = generate_image(model_id=model_id,
                                     body=body)
        image = Image.open(io.BytesIO(image_bytes))
        image.show()

    except ClientError as err:
        message = err.response["Error"]["Message"]
        logger.error("A client error occurred: %s", message)
        print(f"A client error occurred: {message}")
    except ImageError as err:
        logger.error(err.message)
        print(err.message)

    else:
        print(
            f"Finished generating image with Amazon Titan Image Generator G1 model {model_id}.")


if __name__ == "__main__":
    main()
```

------
#### [ Inpainting ]

```
# Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved.
# SPDX-License-Identifier: Apache-2.0
"""
Shows how to use inpainting to generate an image from a source image with 
the Amazon Titan Image Generator G1 model (on demand).
The example uses a mask prompt to specify the area to inpaint.
"""
import base64
import io
import json
import logging
import boto3
from PIL import Image

from botocore.exceptions import ClientError


class ImageError(Exception):
    "Custom exception for errors returned by Amazon Titan Image Generator G1"

    def __init__(self, message):
        self.message = message


logger = logging.getLogger(__name__)
logging.basicConfig(level=logging.INFO)


def generate_image(model_id, body):
    """
    Generate an image using Amazon Titan Image Generator G1 model on demand.
    Args:
        model_id (str): The model ID to use.
        body (str) : The request body to use.
    Returns:
        image_bytes (bytes): The image generated by the model.
    """

    logger.info(
        "Generating image with Amazon Titan Image Generator G1 model %s", model_id)

    bedrock = boto3.client(service_name='bedrock-runtime')

    accept = "application/json"
    content_type = "application/json"

    response = bedrock.invoke_model(
        body=body, modelId=model_id, accept=accept, contentType=content_type
    )
    response_body = json.loads(response.get("body").read())

    base64_image = response_body.get("images")[0]
    base64_bytes = base64_image.encode('ascii')
    image_bytes = base64.b64decode(base64_bytes)

    finish_reason = response_body.get("error")

    if finish_reason is not None:
        raise ImageError(f"Image generation error. Error is {finish_reason}")

    logger.info(
        "Successfully generated image with Amazon Titan Image Generator G1 model %s", model_id)

    return image_bytes


def main():
    """
    Entrypoint for Amazon Titan Image Generator G1 example.
    """
    try:
        logging.basicConfig(level=logging.INFO,
                            format="%(levelname)s: %(message)s")

        model_id = 'amazon.titan-image-generator-v1'

        # Read image from file and encode it as base64 string.
        with open("/path/to/image", "rb") as image_file:
            input_image = base64.b64encode(image_file.read()).decode('utf8')

        body = json.dumps({
            "taskType": "INPAINTING",
            "inPaintingParams": {
                "text": "Modernize the windows of the house",
                "negativeText": "bad quality, low res",
                "image": input_image,
                "maskPrompt": "windows"
            },
            "imageGenerationConfig": {
                "numberOfImages": 1,
                "height": 512,
                "width": 512,
                "cfgScale": 8.0
            }
        })

        image_bytes = generate_image(model_id=model_id,
                                     body=body)
        image = Image.open(io.BytesIO(image_bytes))
        image.show()

    except ClientError as err:
        message = err.response["Error"]["Message"]
        logger.error("A client error occurred: %s", message)
        print(f"A client error occurred: {message}")
    except ImageError as err:
        logger.error(err.message)
        print(err.message)

    else:
        print(
            f"Finished generating image with Amazon Titan Image Generator G1 model {model_id}.")


if __name__ == "__main__":
    main()
```

------
#### [ Outpainting ]

```
# Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved.
# SPDX-License-Identifier: Apache-2.0
"""
Shows how to use outpainting to generate an image from a source image with 
the Amazon Titan Image Generator G1 model (on demand).
The example uses a mask image to outpaint the original image.
"""
import base64
import io
import json
import logging
import boto3
from PIL import Image

from botocore.exceptions import ClientError


class ImageError(Exception):
    "Custom exception for errors returned by Amazon Titan Image Generator G1"

    def __init__(self, message):
        self.message = message


logger = logging.getLogger(__name__)
logging.basicConfig(level=logging.INFO)


def generate_image(model_id, body):
    """
    Generate an image using Amazon Titan Image Generator G1 model on demand.
    Args:
        model_id (str): The model ID to use.
        body (str) : The request body to use.
    Returns:
        image_bytes (bytes): The image generated by the model.
    """

    logger.info(
        "Generating image with Amazon Titan Image Generator G1 model %s", model_id)

    bedrock = boto3.client(service_name='bedrock-runtime')

    accept = "application/json"
    content_type = "application/json"

    response = bedrock.invoke_model(
        body=body, modelId=model_id, accept=accept, contentType=content_type
    )
    response_body = json.loads(response.get("body").read())

    # Check for an error before decoding; a failed response may not include images.
    finish_reason = response_body.get("error")
    if finish_reason is not None:
        raise ImageError(f"Image generation error. Error is {finish_reason}")

    base64_image = response_body.get("images")[0]
    base64_bytes = base64_image.encode('ascii')
    image_bytes = base64.b64decode(base64_bytes)

    logger.info(
        "Successfully generated image with Amazon Titan Image Generator G1 model %s", model_id)

    return image_bytes


def main():
    """
    Entrypoint for Amazon Titan Image Generator G1 example.
    """
    try:
        logging.basicConfig(level=logging.INFO,
                            format="%(levelname)s: %(message)s")

        model_id = 'amazon.titan-image-generator-v1'

        # Read image and mask image from file and encode as base64 strings.
        with open("/path/to/image", "rb") as image_file:
            input_image = base64.b64encode(image_file.read()).decode('utf8')
        with open("/path/to/mask_image", "rb") as mask_image_file:
            input_mask_image = base64.b64encode(
                mask_image_file.read()).decode('utf8')

        body = json.dumps({
            "taskType": "OUTPAINTING",
            "outPaintingParams": {
                "text": "Draw a chocolate chip cookie",
                "negativeText": "bad quality, low res",
                "image": input_image,
                "maskImage": input_mask_image,
                "outPaintingMode": "DEFAULT"
            },
            "imageGenerationConfig": {
                "numberOfImages": 1,
                "height": 512,
                "width": 512,
                "cfgScale": 8.0
            }
        })

        image_bytes = generate_image(model_id=model_id,
                                     body=body)
        image = Image.open(io.BytesIO(image_bytes))
        image.show()

    except ClientError as err:
        message = err.response["Error"]["Message"]
        logger.error("A client error occurred: %s", message)
        print(f"A client error occurred: {message}")
    except ImageError as err:
        logger.error(err.message)
        print(err.message)

    else:
        print(
            f"Finished generating image with Amazon Titan Image Generator G1 model {model_id}.")


if __name__ == "__main__":
    main()
```

------
#### [ Image variation ]

```
# Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved.
# SPDX-License-Identifier: Apache-2.0
"""
Shows how to generate an image variation from a source image with the
Amazon Titan Image Generator G1 model (on demand).
"""
import base64
import io
import json
import logging
import boto3
from PIL import Image

from botocore.exceptions import ClientError


class ImageError(Exception):
    "Custom exception for errors returned by Amazon Titan Image Generator G1"

    def __init__(self, message):
        self.message = message


logger = logging.getLogger(__name__)
logging.basicConfig(level=logging.INFO)


def generate_image(model_id, body):
    """
    Generate an image using Amazon Titan Image Generator G1 model on demand.
    Args:
        model_id (str): The model ID to use.
        body (str) : The request body to use.
    Returns:
        image_bytes (bytes): The image generated by the model.
    """

    logger.info(
        "Generating image with Amazon Titan Image Generator G1 model %s", model_id)

    bedrock = boto3.client(service_name='bedrock-runtime')

    accept = "application/json"
    content_type = "application/json"

    response = bedrock.invoke_model(
        body=body, modelId=model_id, accept=accept, contentType=content_type
    )
    response_body = json.loads(response.get("body").read())

    # Check for an error before decoding; a failed response may not include images.
    finish_reason = response_body.get("error")
    if finish_reason is not None:
        raise ImageError(f"Image generation error. Error is {finish_reason}")

    base64_image = response_body.get("images")[0]
    base64_bytes = base64_image.encode('ascii')
    image_bytes = base64.b64decode(base64_bytes)

    logger.info(
        "Successfully generated image with Amazon Titan Image Generator G1 model %s", model_id)

    return image_bytes


def main():
    """
    Entrypoint for Amazon Titan Image Generator G1 example.
    """
    try:
        logging.basicConfig(level=logging.INFO,
                            format="%(levelname)s: %(message)s")

        model_id = 'amazon.titan-image-generator-v1'

        # Read image from file and encode it as base64 string.
        with open("/path/to/image", "rb") as image_file:
            input_image = base64.b64encode(image_file.read()).decode('utf8')

        body = json.dumps({
            "taskType": "IMAGE_VARIATION",
            "imageVariationParams": {
                "text": "Modernize the house, photo-realistic, 8k, hdr",
                "negativeText": "bad quality, low resolution, cartoon",
                "images": [input_image],
                "similarityStrength": 0.7,  # Range: 0.2 to 1.0
            },
            "imageGenerationConfig": {
                "numberOfImages": 1,
                "height": 512,
                "width": 512,
                "cfgScale": 8.0
            }
        })

        image_bytes = generate_image(model_id=model_id,
                                     body=body)
        image = Image.open(io.BytesIO(image_bytes))
        image.show()

    except ClientError as err:
        message = err.response["Error"]["Message"]
        logger.error("A client error occurred: %s", message)
        print(f"A client error occurred: {message}")
    except ImageError as err:
        logger.error(err.message)
        print(err.message)

    else:
        print(
            f"Finished generating image with Amazon Titan Image Generator G1 model {model_id}.")


if __name__ == "__main__":
    main()
```

------
#### [ Image conditioning (V2 only) ]

```
# Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved.
# SPDX-License-Identifier: Apache-2.0
"""
Shows how to generate image conditioning from a source image with the
Amazon Titan Image Generator G1 V2 model (on demand).
"""
import base64
import io
import json
import logging
import boto3
from PIL import Image

from botocore.exceptions import ClientError


class ImageError(Exception):
    "Custom exception for errors returned by Amazon Titan Image Generator V2"

    def __init__(self, message):
        self.message = message


logger = logging.getLogger(__name__)
logging.basicConfig(level=logging.INFO)


def generate_image(model_id, body):
    """
    Generate an image using Amazon Titan Image Generator V2 model on demand.
    Args:
        model_id (str): The model ID to use.
        body (str) : The request body to use.
    Returns:
        image_bytes (bytes): The image generated by the model.
    """

    logger.info(
        "Generating image with Amazon Titan Image Generator V2 model %s", model_id)

    bedrock = boto3.client(service_name='bedrock-runtime')

    accept = "application/json"
    content_type = "application/json"

    response = bedrock.invoke_model(
        body=body, modelId=model_id, accept=accept, contentType=content_type
    )
    response_body = json.loads(response.get("body").read())

    # Check for an error before decoding; a failed response may not include images.
    finish_reason = response_body.get("error")
    if finish_reason is not None:
        raise ImageError(f"Image generation error. Error is {finish_reason}")

    base64_image = response_body.get("images")[0]
    base64_bytes = base64_image.encode('ascii')
    image_bytes = base64.b64decode(base64_bytes)

    logger.info(
        "Successfully generated image with Amazon Titan Image Generator V2 model %s", model_id)

    return image_bytes


def main():
    """
    Entrypoint for Amazon Titan Image Generator V2 example.
    """
    try:
        logging.basicConfig(level=logging.INFO,
                            format="%(levelname)s: %(message)s")

        model_id = 'amazon.titan-image-generator-v2:0'

        # Read image from file and encode it as base64 string.
        with open("/path/to/image", "rb") as image_file:
            input_image = base64.b64encode(image_file.read()).decode('utf8')

        body = json.dumps({
            "taskType": "TEXT_IMAGE",
            "textToImageParams": {
                "text": "A robot playing soccer, anime cartoon style",
                "negativeText": "bad quality, low res",
                "conditionImage": input_image,
                "controlMode": "CANNY_EDGE"
            },
            "imageGenerationConfig": {
                "numberOfImages": 1,
                "height": 512,
                "width": 512,
                "cfgScale": 8.0
            }
        })

        image_bytes = generate_image(model_id=model_id,
                                     body=body)
        image = Image.open(io.BytesIO(image_bytes))
        image.show()

    except ClientError as err:
        message = err.response["Error"]["Message"]
        logger.error("A client error occurred: %s", message)
        print(f"A client error occurred: {message}")
    except ImageError as err:
        logger.error(err.message)
        print(err.message)

    else:
        print(
            f"Finished generating image with Amazon Titan Image Generator V2 model {model_id}.")


if __name__ == "__main__":
    main()
```

------
#### [ Color guided content (V2 only) ]

```
# Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved.
# SPDX-License-Identifier: Apache-2.0
"""
Shows how to generate an image from a source image color palette with the
Amazon Titan Image Generator G1 V2 model (on demand).
"""
import base64
import io
import json
import logging
import boto3
from PIL import Image

from botocore.exceptions import ClientError


class ImageError(Exception):
    "Custom exception for errors returned by Amazon Titan Image Generator V2"

    def __init__(self, message):
        self.message = message


logger = logging.getLogger(__name__)
logging.basicConfig(level=logging.INFO)


def generate_image(model_id, body):
    """
    Generate an image using Amazon Titan Image Generator V2 model on demand.
    Args:
        model_id (str): The model ID to use.
        body (str) : The request body to use.
    Returns:
        image_bytes (bytes): The image generated by the model.
    """

    logger.info(
        "Generating image with Amazon Titan Image Generator V2 model %s", model_id)

    bedrock = boto3.client(service_name='bedrock-runtime')

    accept = "application/json"
    content_type = "application/json"

    response = bedrock.invoke_model(
        body=body, modelId=model_id, accept=accept, contentType=content_type
    )
    response_body = json.loads(response.get("body").read())

    # Check for an error before decoding; a failed response may not include images.
    finish_reason = response_body.get("error")
    if finish_reason is not None:
        raise ImageError(f"Image generation error. Error is {finish_reason}")

    base64_image = response_body.get("images")[0]
    base64_bytes = base64_image.encode('ascii')
    image_bytes = base64.b64decode(base64_bytes)

    logger.info(
        "Successfully generated image with Amazon Titan Image Generator V2 model %s", model_id)

    return image_bytes


def main():
    """
    Entrypoint for Amazon Titan Image Generator V2 example.
    """
    try:
        logging.basicConfig(level=logging.INFO,
                            format="%(levelname)s: %(message)s")

        model_id = 'amazon.titan-image-generator-v2:0'

        # Read image from file and encode it as base64 string.
        with open("/path/to/image", "rb") as image_file:
            input_image = base64.b64encode(image_file.read()).decode('utf8')

        body = json.dumps({
            "taskType": "COLOR_GUIDED_GENERATION",
            "colorGuidedGenerationParams": {
                "text": "digital painting of a girl, dreamy and ethereal, pink eyes, peaceful expression, ornate frilly dress, fantasy, intricate, elegant, rainbow bubbles, highly detailed, digital painting, artstation, concept art, smooth, sharp focus, illustration",
                "negativeText": "bad quality, low res",
                "referenceImage": input_image,
                "colors": ["#ff8080", "#ffb280", "#ffe680", "#ffe680"]
            },
            "imageGenerationConfig": {
                "numberOfImages": 1,
                "height": 512,
                "width": 512,
                "cfgScale": 8.0
            }
        })

        image_bytes = generate_image(model_id=model_id,
                                     body=body)
        image = Image.open(io.BytesIO(image_bytes))
        image.show()

    except ClientError as err:
        message = err.response["Error"]["Message"]
        logger.error("A client error occurred: %s", message)
        print(f"A client error occurred: {message}")
    except ImageError as err:
        logger.error(err.message)
        print(err.message)

    else:
        print(
            f"Finished generating image with Amazon Titan Image Generator V2 model {model_id}.")


if __name__ == "__main__":
    main()
```

------
#### [ Background removal (V2 only) ]

```
# Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved.
# SPDX-License-Identifier: Apache-2.0
"""
Shows how to generate an image with background removal with the
Amazon Titan Image Generator G1 V2 model (on demand).
"""
import base64
import io
import json
import logging
import boto3
from PIL import Image

from botocore.exceptions import ClientError


class ImageError(Exception):
    "Custom exception for errors returned by Amazon Titan Image Generator V2"

    def __init__(self, message):
        self.message = message


logger = logging.getLogger(__name__)
logging.basicConfig(level=logging.INFO)


def generate_image(model_id, body):
    """
    Generate an image using Amazon Titan Image Generator V2 model on demand.
    Args:
        model_id (str): The model ID to use.
        body (str) : The request body to use.
    Returns:
        image_bytes (bytes): The image generated by the model.
    """

    logger.info(
        "Generating image with Amazon Titan Image Generator V2 model %s", model_id)

    bedrock = boto3.client(service_name='bedrock-runtime')

    accept = "application/json"
    content_type = "application/json"

    response = bedrock.invoke_model(
        body=body, modelId=model_id, accept=accept, contentType=content_type
    )
    response_body = json.loads(response.get("body").read())

    # Check for an error before decoding; a failed response may not include images.
    finish_reason = response_body.get("error")
    if finish_reason is not None:
        raise ImageError(f"Image generation error. Error is {finish_reason}")

    base64_image = response_body.get("images")[0]
    base64_bytes = base64_image.encode('ascii')
    image_bytes = base64.b64decode(base64_bytes)

    logger.info(
        "Successfully generated image with Amazon Titan Image Generator V2 model %s", model_id)

    return image_bytes


def main():
    """
    Entrypoint for Amazon Titan Image Generator V2 example.
    """
    try:
        logging.basicConfig(level=logging.INFO,
                            format="%(levelname)s: %(message)s")

        model_id = 'amazon.titan-image-generator-v2:0'

        # Read image from file and encode it as base64 string.
        with open("/path/to/image", "rb") as image_file:
            input_image = base64.b64encode(image_file.read()).decode('utf8')

        body = json.dumps({
            "taskType": "BACKGROUND_REMOVAL",
            "backgroundRemovalParams": {
                "image": input_image,
            }
        })

        image_bytes = generate_image(model_id=model_id,
                                     body=body)
        image = Image.open(io.BytesIO(image_bytes))
        image.show()

    except ClientError as err:
        message = err.response["Error"]["Message"]
        logger.error("A client error occurred: %s", message)
        print(f"A client error occurred: {message}")
    except ImageError as err:
        logger.error(err.message)
        print(err.message)

    else:
        print(
            f"Finished generating image with Amazon Titan Image Generator V2 model {model_id}.")


if __name__ == "__main__":
    main()
```

------
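All of the examples above display the generated image with PIL's `Image.show()`. To persist the result instead, the returned bytes can be written directly to disk; this minimal sketch assumes the model returns a complete encoded image file (PNG), so no re-encoding is needed:

```python
def save_image_bytes(image_bytes, path):
    """Write the raw bytes returned by generate_image() to disk.
    The bytes are assumed to already be a complete encoded image file."""
    with open(path, "wb") as f:
        f.write(image_bytes)

# Example: save_image_bytes(image_bytes, "generated.png")
```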

# Amazon Titan Embeddings G1 - Text
<a name="model-parameters-titan-embed-text"></a>

Titan Embeddings G1 - Text doesn't support the use of inference parameters. The following sections describe the request and response formats and provide a code example.

**Topics**
+ [Request and response](#model-parameters-titan-embed-text-request-response)
+ [Code examples](#api-inference-examples-titan-embed-text)

## Request and response
<a name="model-parameters-titan-embed-text-request-response"></a>

You pass the request body in the `body` field of an [InvokeModel](https://docs.aws.amazon.com/bedrock/latest/APIReference/API_runtime_InvokeModel.html) request.

------
#### [ V2 Request ]

The inputText field is required; normalize, dimensions, and embeddingTypes are optional.
+ inputText – The text to convert into embeddings.
+ normalize – (optional) Flag that indicates whether or not to normalize the output embeddings. Defaults to true.
+ dimensions – (optional) The number of dimensions the output embeddings should have. The following values are accepted: 1024 (default), 512, 256.
+ embeddingTypes – (optional) Accepts a list containing "float", "binary", or both. Defaults to `float`.

```
{
    "inputText": string,
    "dimensions": int,
    "normalize": boolean,
    "embeddingTypes": list
}
```
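For example, a helper like the following (a minimal sketch; the function name and field values are illustrative) assembles a valid V2 request body:

```python
import json

def build_titan_v2_embed_request(text, dimensions=1024, normalize=True,
                                 embedding_types=None):
    """Assemble an InvokeModel request body for Titan Text Embeddings V2."""
    body = {
        "inputText": text,
        "dimensions": dimensions,  # accepted values: 1024 (default), 512, 256
        "normalize": normalize,
    }
    if embedding_types is not None:
        # e.g. ["float"], ["binary"], or both
        body["embeddingTypes"] = embedding_types
    return json.dumps(body)

# A request asking for 512-dimensional, normalized float embeddings:
body = build_titan_v2_embed_request("Hello, Bedrock!", dimensions=512)
```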

------
#### [ V2 Response ]

The fields are described below.
+ embedding – An array that represents the embeddings vector of the input you provided. The type is always `float`.
+ inputTextTokenCount – The number of tokens in the input.
+ embeddingsByType – A dictionary or map of the embedding lists. Depending on the input, it lists "float", "binary", or both.
  + Example: `"embeddingsByType": {"binary": [int,..], "float": [float,...]}`
  + This field always appears. Even if you don't specify `embeddingTypes` in the input, "float" is still included. Example: `"embeddingsByType": {"float": [float,...]}`

```
{
    "embedding": [float, float, ...],
    "inputTextTokenCount": int,
    "embeddingsByType": {"binary": [int,..], "float": [float,...]}
}
```
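A client can read either vector from this structure. The payload below is fabricated for illustration, but its shape follows the format above:

```python
import json

# Fabricated V2 response payload, shaped like the format shown above.
sample_response = json.dumps({
    "embedding": [0.12, -0.53, 0.08],
    "inputTextTokenCount": 9,
    "embeddingsByType": {
        "float": [0.12, -0.53, 0.08],
        "binary": [1, 0, 1],
    },
})

parsed = json.loads(sample_response)

# "embeddingsByType" always includes "float"; "binary" appears only if requested.
float_vector = parsed["embeddingsByType"]["float"]
binary_vector = parsed["embeddingsByType"].get("binary")
token_count = parsed["inputTextTokenCount"]
```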

------
#### [ G1 Request ]

The only available field is `inputText`, in which you include the text to convert into embeddings.

```
{
    "inputText": string
}
```

------
#### [ G1 Response ]

The `body` of the response contains the following fields.

```
{
    "embedding": [float, float, ...],
    "inputTextTokenCount": int
}
```

The fields are described below.
+ **embedding** – An array that represents the embeddings vector of the input you provided.
+ **inputTextTokenCount** – The number of tokens in the input.

------

## Code examples
<a name="api-inference-examples-titan-embed-text"></a>

The following examples show how to call Amazon Titan embeddings models to generate embeddings. Select the tab that corresponds to the model you're using:

------
#### [ Amazon Titan Embeddings G1 - Text ]

```
# Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved.
# SPDX-License-Identifier: Apache-2.0
"""
Shows how to generate an embedding with the Amazon Titan Embeddings G1 - Text model (on demand).
"""

import json
import logging
import boto3


from botocore.exceptions import ClientError


logger = logging.getLogger(__name__)
logging.basicConfig(level=logging.INFO)


def generate_embedding(model_id, body):
    """
    Generate an embedding with the vector representation of a text input using Amazon Titan Embeddings G1 - Text on demand.
    Args:
        model_id (str): The model ID to use.
        body (str) : The request body to use.
    Returns:
        response (JSON): The embedding created by the model and the number of input tokens.
    """

    logger.info("Generating an embedding with Amazon Titan Embeddings G1 - Text model %s", model_id)

    bedrock = boto3.client(service_name='bedrock-runtime')

    accept = "application/json"
    content_type = "application/json"

    response = bedrock.invoke_model(
        body=body, modelId=model_id, accept=accept, contentType=content_type
    )

    response_body = json.loads(response.get('body').read())

    return response_body


def main():
    """
    Entrypoint for Amazon Titan Embeddings G1 - Text example.
    """

    logging.basicConfig(level=logging.INFO,
                        format="%(levelname)s: %(message)s")

    model_id = "amazon.titan-embed-text-v1"
    input_text = "What are the different services that you offer?"


    # Create request body.
    body = json.dumps({
        "inputText": input_text,
    })


    try:

        response = generate_embedding(model_id, body)

        print(f"Generated an embedding: {response['embedding']}")
        print(f"Input Token count:  {response['inputTextTokenCount']}")

    except ClientError as err:
        message = err.response["Error"]["Message"]
        logger.error("A client error occurred: %s", message)
        print(f"A client error occurred: {message}")

    else:
        print(f"Finished generating an embedding with Amazon Titan Embeddings G1 - Text model {model_id}.")


if __name__ == "__main__":
    main()
```

------
#### [ Amazon Titan Text Embeddings V2 ]

When you use Titan Text Embeddings V2, the `embedding` field is not in the response if `embeddingTypes` contains only `binary`.

```
# Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved.
# SPDX-License-Identifier: Apache-2.0
"""
Shows how to generate an embedding with the Amazon Titan Text Embeddings V2 Model
"""

import json
import logging
import boto3


from botocore.exceptions import ClientError


logger = logging.getLogger(__name__)
logging.basicConfig(level=logging.INFO)


def generate_embedding(model_id, body):
    """
    Generate an embedding with the vector representation of a text input using Amazon Titan Text Embeddings V2 on demand.
    Args:
        model_id (str): The model ID to use.
        body (str) : The request body to use.
    Returns:
        response (JSON): The embedding created by the model and the number of input tokens.
    """

    logger.info("Generating an embedding with Amazon Titan Text Embeddings V2 model %s", model_id)

    bedrock = boto3.client(service_name='bedrock-runtime')

    accept = "application/json"
    content_type = "application/json"

    response = bedrock.invoke_model(
        body=body, modelId=model_id, accept=accept, contentType=content_type
    )

    response_body = json.loads(response.get('body').read())

    return response_body


def main():
    """
    Entrypoint for Amazon Titan Embeddings V2 - Text example.
    """

    logging.basicConfig(level=logging.INFO,
                        format="%(levelname)s: %(message)s")

    model_id = "amazon.titan-embed-text-v2:0"
    input_text = "What are the different services that you offer?"


    # Create request body.
    body = json.dumps({
        "inputText": input_text,
        "embeddingTypes": ["binary"]
    })


    try:

        response = generate_embedding(model_id, body)

        print(f"Generated an embedding: {response['embeddingsByType']['binary']}") # returns binary embedding
        print(f"Input text: {input_text}")
        print(f"Input Token count:  {response['inputTextTokenCount']}")

    except ClientError as err:
        message = err.response["Error"]["Message"]
        logger.error("A client error occurred: %s", message)
        print(f"A client error occurred: {message}")

    else:
        print(f"Finished generating an embedding with Amazon Titan Text Embeddings V2 model {model_id}.")


if __name__ == "__main__":
    main()
```

------

# Amazon Titan Multimodal Embeddings G1
<a name="model-parameters-titan-embed-mm"></a>

This section provides request and response body formats and code examples for using Amazon Titan Multimodal Embeddings G1.

**Topics**
+ [Request and response](#model-parameters-titan-embed-mm-request-response)
+ [Code examples](#api-inference-examples-titan-embed-mm)

## Request and response
<a name="model-parameters-titan-embed-mm-request-response"></a>

You pass the request body in the `body` field of an [InvokeModel](https://docs.aws.amazon.com/bedrock/latest/APIReference/API_runtime_InvokeModel.html) request.

------
#### [ Request ]

The request body for Amazon Titan Multimodal Embeddings G1 includes the following fields.

```
{
    "inputText": string,
    "inputImage": base64-encoded string,
    "embeddingConfig": {
        "outputEmbeddingLength": 256 | 384 | 1024
    }
}
```

Include at least one of the following two fields. Including both generates a single embeddings vector that averages the generated text embeddings and image embeddings vectors.
+ **inputText** – The text to convert into embeddings.
+ **inputImage** – Encode the image that you want to convert into embeddings in base64 and enter the string in this field. For an example of how to encode an image as base64 and how to decode a base64-encoded string back into an image, see the [code examples](#api-inference-examples-titan-embed-mm).

The following field is optional.
+ **embeddingConfig** – Contains an `outputEmbeddingLength` field, in which you specify one of the following lengths for the output embeddings vector:
  + 256
  + 384
  + 1024 (default)
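A round-trip sketch of the base64 handling that `inputImage` expects (the helper names are illustrative; with a real file you would pass the bytes read from disk):

```python
import base64

def encode_image_bytes(image_bytes):
    """Return the base64 string expected by the inputImage field."""
    return base64.b64encode(image_bytes).decode("utf8")

def decode_image_string(b64_string):
    """Recover the raw image bytes from a base64 string."""
    return base64.b64decode(b64_string.encode("ascii"))

# With a real image you would first read the raw bytes:
# with open("/path/to/image", "rb") as f:
#     input_image = encode_image_bytes(f.read())
```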

------
#### [ Response ]

The `body` of the response contains the following fields.

```
{
    "embedding": [float, float, ...],
    "inputTextTokenCount": int,
    "message": string
}
```

The fields are described below.
+ **embedding** – An array that represents the embeddings vector of the input you provided.
+ **inputTextTokenCount** – The number of tokens in the text input.
+ **message** – Specifies any errors that occur during generation.
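Because text and image inputs map into the same embedding space, a typical use of the returned vectors is similarity scoring. A plain-Python cosine similarity (no external dependencies; the vectors below are fabricated, but in practice each would come from the `embedding` field of a response) might look like this:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Illustrative vectors; in practice, compare a text embedding with an image embedding.
score = cosine_similarity([0.1, 0.3, -0.2], [0.2, 0.25, -0.1])
```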

------

## Code examples
<a name="api-inference-examples-titan-embed-mm"></a>

The following examples show how to invoke the Amazon Titan Multimodal Embeddings G1 model with on-demand throughput using the Python SDK. Select a tab to see an example for each use case.

------
#### [ Text embeddings ]

This example shows how to call the Amazon Titan Multimodal Embeddings G1 model to generate text embeddings.

```
# Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved.
# SPDX-License-Identifier: Apache-2.0
"""
Shows how to generate embeddings from text with the Amazon Titan Multimodal Embeddings G1 model (on demand).
"""

import json
import logging
import boto3


from botocore.exceptions import ClientError

class EmbedError(Exception):
    "Custom exception for errors returned by Amazon Titan Multimodal Embeddings G1"

    def __init__(self, message):
        self.message = message

logger = logging.getLogger(__name__)
logging.basicConfig(level=logging.INFO)


def generate_embeddings(model_id, body):
    """
    Generate a vector of embeddings for a text input using Amazon Titan Multimodal Embeddings G1 on demand.
    Args:
        model_id (str): The model ID to use.
        body (str) : The request body to use.
    Returns:
        response (JSON): The embeddings that the model generated, token information, and the
        reason the model stopped generating embeddings.
    """

    logger.info("Generating embeddings with Amazon Titan Multimodal Embeddings G1 model %s", model_id)

    bedrock = boto3.client(service_name='bedrock-runtime')

    accept = "application/json"
    content_type = "application/json"

    response = bedrock.invoke_model(
        body=body, modelId=model_id, accept=accept, contentType=content_type
    )

    response_body = json.loads(response.get('body').read())

    finish_reason = response_body.get("message")

    if finish_reason is not None:
        raise EmbedError(f"Embeddings generation error: {finish_reason}")

    return response_body


def main():
    """
    Entrypoint for Amazon Titan Multimodal Embeddings G1 example.
    """

    logging.basicConfig(level=logging.INFO,
                        format="%(levelname)s: %(message)s")

    model_id = "amazon.titan-embed-image-v1"
    input_text = "What are the different services that you offer?"
    output_embedding_length = 256

    # Create request body.
    body = json.dumps({
        "inputText": input_text,
        "embeddingConfig": {
            "outputEmbeddingLength": output_embedding_length
        }
    })


    try:

        response = generate_embeddings(model_id, body)

        print(f"Generated text embeddings of length {output_embedding_length}: {response['embedding']}")
        print(f"Input text token count:  {response['inputTextTokenCount']}")

    except ClientError as err:
        message = err.response["Error"]["Message"]
        logger.error("A client error occurred: %s", message)
        print(f"A client error occurred: {message}")
        
    except EmbedError as err:
        logger.error(err.message)
        print(err.message)

    else:
        print(f"Finished generating text embeddings with Amazon Titan Multimodal Embeddings G1 model {model_id}.")


if __name__ == "__main__":
    main()
```

------
#### [ Image embeddings ]

This example shows how to call the Amazon Titan Multimodal Embeddings G1 model to generate image embeddings.

```
# Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved.
# SPDX-License-Identifier: Apache-2.0
"""
Shows how to generate embeddings from an image with the Amazon Titan Multimodal Embeddings G1 model (on demand).
"""

import base64
import json
import logging
import boto3

from botocore.exceptions import ClientError

class EmbedError(Exception):
    "Custom exception for errors returned by Amazon Titan Multimodal Embeddings G1"

    def __init__(self, message):
        self.message = message

logger = logging.getLogger(__name__)
logging.basicConfig(level=logging.INFO)


def generate_embeddings(model_id, body):
    """
    Generate a vector of embeddings for an image input using Amazon Titan Multimodal Embeddings G1 on demand.
    Args:
        model_id (str): The model ID to use.
        body (str) : The request body to use.
    Returns:
        response (JSON): The embeddings that the model generated, token information, and the
        reason the model stopped generating embeddings.
    """

    logger.info("Generating embeddings with Amazon Titan Multimodal Embeddings G1 model %s", model_id)

    bedrock = boto3.client(service_name='bedrock-runtime')

    accept = "application/json"
    content_type = "application/json"

    response = bedrock.invoke_model(
        body=body, modelId=model_id, accept=accept, contentType=content_type
    )

    response_body = json.loads(response.get('body').read())

    finish_reason = response_body.get("message")

    if finish_reason is not None:
        raise EmbedError(f"Embeddings generation error: {finish_reason}")

    return response_body


def main():
    """
    Entrypoint for Amazon Titan Multimodal Embeddings G1 example.
    """

    logging.basicConfig(level=logging.INFO,
                        format="%(levelname)s: %(message)s")

    # Read image from file and encode it as base64 string.
    with open("/path/to/image", "rb") as image_file:
        input_image = base64.b64encode(image_file.read()).decode('utf8')

    model_id = 'amazon.titan-embed-image-v1'
    output_embedding_length = 256

    # Create request body.
    body = json.dumps({
        "inputImage": input_image,
        "embeddingConfig": {
            "outputEmbeddingLength": output_embedding_length
        }
    })


    try:

        response = generate_embeddings(model_id, body)

        print(f"Generated image embeddings of length {output_embedding_length}: {response['embedding']}")

    except ClientError as err:
        message = err.response["Error"]["Message"]
        logger.error("A client error occurred: %s", message)
        print(f"A client error occurred: {message}")
        
    except EmbedError as err:
        logger.error(err.message)
        print(err.message)

    else:
        print(f"Finished generating image embeddings with Amazon Titan Multimodal Embeddings G1 model {model_id}.")


if __name__ == "__main__":
    main()
```
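The example above parses the response body and treats a `message` field as an error indicator. The same parsing logic can be sketched locally against a hypothetical response payload (the field values below are illustrative, not real model output):

```python
import json

# Hypothetical response payload with the fields the example reads;
# the embedding values are illustrative, not real model output.
sample_body = json.dumps({
    "embedding": [0.1, -0.2, 0.3],
    "inputTextTokenCount": 0,
})

response_body = json.loads(sample_body)

# A "message" field would indicate an error; it is absent here, so get() returns None.
if response_body.get("message") is None:
    embedding = response_body["embedding"]
    print(f"Embedding length: {len(embedding)}")
```

With a real invocation, the length of `embedding` matches the `outputEmbeddingLength` set in the request body.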

------
#### [ Text and image embeddings ]

This example shows how to call the Amazon Titan Multimodal Embeddings G1 model to generate embeddings from combined text and image input. The resulting vector is the average of the generated text embedding vector and the generated image embedding vector.

```
# Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved.
# SPDX-License-Identifier: Apache-2.0
"""
Shows how to generate embeddings from an image and accompanying text with the Amazon Titan Multimodal Embeddings G1 model (on demand).
"""

import base64
import json
import logging
import boto3

from botocore.exceptions import ClientError

class EmbedError(Exception):
    "Custom exception for errors returned by Amazon Titan Multimodal Embeddings G1"

    def __init__(self, message):
        self.message = message

logger = logging.getLogger(__name__)


def generate_embeddings(model_id, body):
    """
    Generate a vector of embeddings for a combined text and image input using Amazon Titan Multimodal Embeddings G1 on demand.
    Args:
        model_id (str): The model ID to use.
        body (str) : The request body to use.
    Returns:
        response (JSON): The embeddings that the model generated and the input token count.
    """

    logger.info("Generating embeddings with Amazon Titan Multimodal Embeddings G1 model %s", model_id)

    bedrock = boto3.client(service_name='bedrock-runtime')

    accept = "application/json"
    content_type = "application/json"

    response = bedrock.invoke_model(
        body=body, modelId=model_id, accept=accept, contentType=content_type
    )

    response_body = json.loads(response.get('body').read())

    # The response body carries a "message" field only when an error occurred.
    error_message = response_body.get("message")

    if error_message is not None:
        raise EmbedError(f"Embeddings generation error: {error_message}")

    return response_body


def main():
    """
    Entrypoint for Amazon Titan Multimodal Embeddings G1 example.
    """

    logging.basicConfig(level=logging.INFO,
                        format="%(levelname)s: %(message)s")

    model_id = "amazon.titan-embed-image-v1"
    input_text = "A family eating dinner"
    # Read image from file and encode it as base64 string.
    with open("/path/to/image", "rb") as image_file:
        input_image = base64.b64encode(image_file.read()).decode('utf8')
    output_embedding_length = 256

    # Create request body.
    body = json.dumps({
        "inputText": input_text,
        "inputImage": input_image,
        "embeddingConfig": {
            "outputEmbeddingLength": output_embedding_length
        }
    })

    try:
        response = generate_embeddings(model_id, body)

        print(f"Generated embeddings of length {output_embedding_length}: {response['embedding']}")
        print(f"Input text token count:  {response['inputTextTokenCount']}")

    except ClientError as err:
        message = err.response["Error"]["Message"]
        logger.error("A client error occurred: %s", message)
        print(f"A client error occurred: {message}")

    except EmbedError as err:
        logger.error(err.message)
        print(err.message)

    else:
        print(f"Finished generating embeddings with Amazon Titan Multimodal Embeddings G1 model {model_id}.")


if __name__ == "__main__":
    main()
```
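The combined vector is described above as the average of the text and image embedding vectors. That element-wise averaging can be sketched locally with short hypothetical vectors (real model output would be much longer):

```python
def average_embeddings(text_vec, image_vec):
    """Element-wise average of two equal-length embedding vectors."""
    if len(text_vec) != len(image_vec):
        raise ValueError("Embedding vectors must have the same length")
    return [(t + i) / 2 for t, i in zip(text_vec, image_vec)]

# Hypothetical 3-dimensional vectors; real Titan Multimodal Embeddings
# vectors have 256, 384, or 1024 dimensions per outputEmbeddingLength.
combined = average_embeddings([0.2, -0.4, 0.6], [0.0, 0.4, 0.2])
print(combined)
```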

------