

This is a machine-translated version of the English original. If there is any ambiguity or inconsistency, the English version prevails.

# Amazon Titan models
<a name="model-parameters-titan"></a>

This section describes the request parameters and response fields for Amazon Titan models. Use this information to make inference calls to Amazon Titan models with the [InvokeModel](https://docs.aws.amazon.com/bedrock/latest/APIReference/API_runtime_InvokeModel.html) and [InvokeModelWithResponseStream](https://docs.aws.amazon.com/bedrock/latest/APIReference/API_runtime_InvokeModelWithResponseStream.html) (streaming) operations. This section also includes Python code examples that show how to call Amazon Titan models. To use a model in an inference operation, you need the model ID for the model. To get the model ID, see [Supported foundation models in Amazon Bedrock](models-supported.md). Some Amazon Titan models can also be used with the [Converse API](conversation-inference.md). To check whether the Converse API supports a specific Amazon Titan model, see [Supported models and model features](conversation-inference-supported-models-features.md). For more code examples, see [Code examples for Amazon Bedrock using AWS SDKs](service_code_examples.md).

Foundation models in Amazon Bedrock support input and output modalities that vary from model to model. To check the modalities that Amazon Titan models support, the Amazon Bedrock features they support, and the AWS Regions in which they are available, see [Supported foundation models in Amazon Bedrock](models-supported.md).

When you make an inference call to an Amazon Titan model, you include a prompt for the model. For information about creating prompts for the models that Amazon Bedrock supports, see [Prompt engineering concepts](prompt-engineering-guidelines.md).

**Topics**
+ [Amazon Titan Text models](model-parameters-titan-text.md)
+ [Amazon Titan Image Generator G1 models](model-parameters-titan-image.md)
+ [Amazon Titan Embeddings G1 - Text](model-parameters-titan-embed-text.md)
+ [Amazon Titan Multimodal Embeddings G1](model-parameters-titan-embed-mm.md)

# Amazon Titan Text models
<a name="model-parameters-titan-text"></a>

Amazon Titan Text models support the following inference parameters.

For more information about prompt engineering for Titan Text, see the [Titan Text Prompt Engineering Guidelines](https://d2eo22ngex1n9g.cloudfront.net/Documentation/User+Guides/Titan/Amazon+Titan+Text+Prompt+Engineering+Guidelines.pdf).

For more information about Titan models, see [Overview of Amazon Titan models](titan-models.md).

**Topics**
+ [Request and response](#model-parameters-titan-request-response)
+ [Code examples](#inference-titan-code)

## Request and response
<a name="model-parameters-titan-request-response"></a>

You pass the request body in the `body` field of an [InvokeModel](https://docs.aws.amazon.com/bedrock/latest/APIReference/API_runtime_InvokeModel.html) or [InvokeModelWithResponseStream](https://docs.aws.amazon.com/bedrock/latest/APIReference/API_runtime_InvokeModelWithResponseStream.html) request.

------
#### [ Request ]

```
{
    "inputText": string,
    "textGenerationConfig": {
        "temperature": float,  
        "topP": float,
        "maxTokenCount": int,
        "stopSequences": [string]
    }
}
```

The following is a required parameter:
+ **inputText** – The prompt to provide the model for generating a response. To generate responses in a conversational style, wrap the prompt in the following format:

  ```
  "inputText": "User: <theUserPrompt>\nBot:"
  ```

  This format indicates to the model that it should respond on a new line after the user provides a prompt.

`textGenerationConfig` is optional. You can use it to configure the following [inference parameters](inference-parameters.md):
+ **temperature** – Use a lower value to decrease randomness in the response.  
[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/zh_tw/bedrock/latest/userguide/model-parameters-titan-text.html)
+ **topP** – Use a lower value to ignore less probable options and decrease the diversity of responses.  
[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/zh_tw/bedrock/latest/userguide/model-parameters-titan-text.html)
+ **maxTokenCount** – Specify the maximum number of tokens to generate in the response. Maximum token limits are strictly enforced.  
[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/zh_tw/bedrock/latest/userguide/model-parameters-titan-text.html)
+ **stopSequences** – Specify a character sequence to indicate where the model should stop.
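To make the schema concrete, here is a minimal sketch that assembles and serializes a request body following the field descriptions above; the conversational framing and every parameter value are illustrative choices, not defaults:

```python
import json

# Build a Titan Text request body following the schema above.
# The "User:/Bot:" framing and all parameter values here are
# illustrative choices, not required defaults.
body = json.dumps({
    "inputText": "User: Summarize the benefits of unit testing.\nBot:",
    "textGenerationConfig": {
        "temperature": 0.5,          # lower -> less randomness
        "topP": 0.9,                 # lower -> less diversity
        "maxTokenCount": 512,        # hard cap on generated tokens
        "stopSequences": ["User:"],  # stop before a new user turn
    },
})

# `body` is the string you would pass as the body argument of InvokeModel.
print(body)
```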

------
#### [ InvokeModel Response ]

```
{
    "inputTextTokenCount": int,
    "results": [{
        "tokenCount": int,
        "outputText": "\n<response>\n",
        "completionReason": "string"
    }]
}
```

The response body contains the following fields:
+ **inputTextTokenCount** – The number of tokens in the prompt.
+ **results** – An array of one item, an object that contains the following fields:
  + **tokenCount** – The number of tokens in the response.
  + **outputText** – The text in the response.
  + **completionReason** – The reason the response finished being generated. The following reasons are possible:
    + FINISHED – The response was fully generated.
    + LENGTH – The response was truncated because of the response length you set.
    + STOP\_CRITERIA\_MET – The response was truncated because a stop criterion was reached.
    + RAG\_QUERY\_WHEN\_RAG\_DISABLED – The feature is disabled and cannot complete the query.
    + CONTENT\_FILTERED – The content was filtered or removed by the applied content filter.

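As a sketch of reading these fields, the following walks a hypothetical response body with the shape above (all values are made up for illustration):

```python
# A hypothetical InvokeModel response body with the documented shape.
response_body = {
    "inputTextTokenCount": 12,
    "results": [{
        "tokenCount": 205,
        "outputText": "\nHere is the generated answer.\n",
        "completionReason": "FINISHED",
    }],
}

# `results` is an array with a single item; read its fields.
result = response_body["results"][0]
truncated = result["completionReason"] in ("LENGTH", "STOP_CRITERIA_MET")
print(result["outputText"].strip())
print("truncated:", truncated)
```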
------
#### [ InvokeModelWithResponseStream Response ]

The text in each chunk in the body of the response stream has the following format. You must decode the `bytes` field (see the example in [Submit a single prompt with InvokeModel](inference-invoke.md)).

```
{
    "chunk": {
        "bytes": b'{
            "index": int,
            "inputTextTokenCount": int,
            "totalOutputTextTokenCount": int,
            "outputText": "<response-chunk>",
            "completionReason": "string"
        }'
    }
}
```
+ **index** – The index of the chunk in the streaming response.
+ **inputTextTokenCount** – The number of tokens in the prompt.
+ **totalOutputTextTokenCount** – The number of tokens in the response.
+ **outputText** – The text in the response.
+ **completionReason** – The reason the response finished being generated. The following reasons are possible.
  + FINISHED – The response was fully generated.
  + LENGTH – The response was truncated because of the response length you set.
  + STOP\_CRITERIA\_MET – The response was truncated because a stop criterion was reached.
  + RAG\_QUERY\_WHEN\_RAG\_DISABLED – The feature is disabled and cannot complete the query.
  + CONTENT\_FILTERED – The content was filtered or removed by the applied filter.
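A minimal sketch of decoding one streamed chunk, assuming the event shape shown above; the payload is a made-up example, and treating `completionReason` as null on a non-final chunk is an assumption here:

```python
import json

# A hypothetical event as delivered in the response stream.
event = {
    "chunk": {
        "bytes": b'{"index": 0, "inputTextTokenCount": 12, '
                 b'"totalOutputTextTokenCount": 5, '
                 b'"outputText": "Hello", "completionReason": null}'
    }
}

# Decode the raw bytes of the chunk into the documented fields.
chunk = json.loads(event["chunk"]["bytes"].decode("utf-8"))
print(chunk["index"], repr(chunk["outputText"]))
```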

------

## Code examples
<a name="inference-titan-code"></a>

The following example shows how to run inference with the Amazon Titan Text Premier model with the Python SDK.

```
# Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved.
# SPDX-License-Identifier: Apache-2.0
"""
Shows how to create a list of action items from a meeting transcript
with the Amazon Titan Text model (on demand).
"""
import json
import logging
import boto3

from botocore.exceptions import ClientError


class ImageError(Exception):
    "Custom exception for errors returned by Amazon Titan Text models"

    def __init__(self, message):
        self.message = message


logger = logging.getLogger(__name__)
logging.basicConfig(level=logging.INFO)


def generate_text(model_id, body):
    """
    Generate text using Amazon Titan Text models on demand.
    Args:
        model_id (str): The model ID to use.
        body (str) : The request body to use.
    Returns:
        response (json): The response from the model.
    """

    logger.info(
        "Generating text with Amazon Titan Text model %s", model_id)

    bedrock = boto3.client(service_name='bedrock-runtime')

    accept = "application/json"
    content_type = "application/json"

    response = bedrock.invoke_model(
        body=body, modelId=model_id, accept=accept, contentType=content_type
    )
    response_body = json.loads(response.get("body").read())

    finish_reason = response_body.get("error")

    if finish_reason is not None:
        raise ImageError(f"Text generation error. Error is {finish_reason}")

    logger.info(
        "Successfully generated text with Amazon Titan Text model %s", model_id)

    return response_body


def main():
    """
    Entrypoint for Amazon Titan Text model example.
    """
    try:
        logging.basicConfig(level=logging.INFO,
                            format="%(levelname)s: %(message)s")

        # You can replace model_id with the ID of any other Titan Text model.
        # The Titan Text model family model IDs are:
        # amazon.titan-text-premier-v1:0, amazon.titan-text-express-v1, amazon.titan-text-lite-v1
        model_id = 'amazon.titan-text-premier-v1:0'

        prompt = """Meeting transcript: Miguel: Hi Brant, I want to discuss the workstream  
            for our new product launch Brant: Sure Miguel, is there anything in particular you want
            to discuss? Miguel: Yes, I want to talk about how users enter into the product.
            Brant: Ok, in that case let me add in Namita. Namita: Hey everyone 
            Brant: Hi Namita, Miguel wants to discuss how users enter into the product.
            Miguel: its too complicated and we should remove friction.  
            for example, why do I need to fill out additional forms?  
            I also find it difficult to find where to access the product
            when I first land on the landing page. Brant: I would also add that
            I think there are too many steps. Namita: Ok, I can work on the
            landing page to make the product more discoverable but brant
            can you work on the additional forms? Brant: Yes but I would need 
            to work with James from another team as he needs to unblock the sign up workflow.
            Miguel can you document any other concerns so that I can discuss with James only once?
            Miguel: Sure.
            From the meeting transcript above, Create a list of action items for each person. """

        body = json.dumps({
            "inputText": prompt,
            "textGenerationConfig": {
                "maxTokenCount": 3072,
                "stopSequences": [],
                "temperature": 0.7,
                "topP": 0.9
            }
        })

        response_body = generate_text(model_id, body)
        print(f"Input token count: {response_body['inputTextTokenCount']}")

        for result in response_body['results']:
            print(f"Token count: {result['tokenCount']}")
            print(f"Output text: {result['outputText']}")
            print(f"Completion reason: {result['completionReason']}")

    except ClientError as err:
        message = err.response["Error"]["Message"]
        logger.error("A client error occurred: %s", message)
        print("A client error occured: " +
              format(message))
    except ImageError as err:
        logger.error(err.message)
        print(err.message)

    else:
        print(
            f"Finished generating text with the Amazon Titan Text Premier model {model_id}.")


if __name__ == "__main__":
    main()
```

The following example shows how to run inference with the Amazon Titan Text G1 - Express model with the Python SDK.

```
# Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved.
# SPDX-License-Identifier: Apache-2.0
"""
Shows how to create a list of action items from a meeting transcript
with the Amazon Titan Text G1 - Express model (on demand).
"""
import json
import logging
import boto3

from botocore.exceptions import ClientError


class ImageError(Exception):
    "Custom exception for errors returned by Amazon &titan-text-express; model"

    def __init__(self, message):
        self.message = message


logger = logging.getLogger(__name__)
logging.basicConfig(level=logging.INFO)


def generate_text(model_id, body):
    """
    Generate text using the Amazon Titan Text G1 - Express model on demand.
    Args:
        model_id (str): The model ID to use.
        body (str) : The request body to use.
    Returns:
        response (json): The response from the model.
    """

    logger.info(
        "Generating text with Amazon &titan-text-express; model %s", model_id)

    bedrock = boto3.client(service_name='bedrock-runtime')

    accept = "application/json"
    content_type = "application/json"

    response = bedrock.invoke_model(
        body=body, modelId=model_id, accept=accept, contentType=content_type
    )
    response_body = json.loads(response.get("body").read())

    finish_reason = response_body.get("error")

    if finish_reason is not None:
        raise ImageError(f"Text generation error. Error is {finish_reason}")

    logger.info(
        "Successfully generated text with Amazon &titan-text-express; model %s", model_id)

    return response_body


def main():
    """
    Entrypoint for Amazon Titan Text G1 - Express example.
    """
    try:
        logging.basicConfig(level=logging.INFO,
                            format="%(levelname)s: %(message)s")

        model_id = 'amazon.titan-text-express-v1'

        prompt = """Meeting transcript: Miguel: Hi Brant, I want to discuss the workstream  
            for our new product launch Brant: Sure Miguel, is there anything in particular you want
            to discuss? Miguel: Yes, I want to talk about how users enter into the product.
            Brant: Ok, in that case let me add in Namita. Namita: Hey everyone 
            Brant: Hi Namita, Miguel wants to discuss how users enter into the product.
            Miguel: its too complicated and we should remove friction.  
            for example, why do I need to fill out additional forms?  
            I also find it difficult to find where to access the product
            when I first land on the landing page. Brant: I would also add that
            I think there are too many steps. Namita: Ok, I can work on the
            landing page to make the product more discoverable but brant
            can you work on the additional forms? Brant: Yes but I would need 
            to work with James from another team as he needs to unblock the sign up workflow.
            Miguel can you document any other concerns so that I can discuss with James only once?
            Miguel: Sure.
            From the meeting transcript above, Create a list of action items for each person. """

        body = json.dumps({
            "inputText": prompt,
            "textGenerationConfig": {
                "maxTokenCount": 4096,
                "stopSequences": [],
                "temperature": 0,
                "topP": 1
            }
        })

        response_body = generate_text(model_id, body)
        print(f"Input token count: {response_body['inputTextTokenCount']}")

        for result in response_body['results']:
            print(f"Token count: {result['tokenCount']}")
            print(f"Output text: {result['outputText']}")
            print(f"Completion reason: {result['completionReason']}")

    except ClientError as err:
        message = err.response["Error"]["Message"]
        logger.error("A client error occurred: %s", message)
        print("A client error occured: " +
              format(message))
    except ImageError as err:
        logger.error(err.message)
        print(err.message)

    else:
        print(
            f"Finished generating text with the Amazon &titan-text-express; model {model_id}.")


if __name__ == "__main__":
    main()
```

# Amazon Titan Image Generator G1 models
<a name="model-parameters-titan-image"></a>

When you run model inference, the Amazon Titan Image Generator G1 V1 and Titan Image Generator G1 V2 models support the following inference parameters and model responses.

**Topics**
+ [Inference parameters](#model-parameters-titan-image-api)
+ [Examples](#model-parameters-titan-image-code-examples)

## Inference parameters
<a name="model-parameters-titan-image-api"></a>

When you make an [InvokeModel](https://docs.aws.amazon.com/bedrock/latest/APIReference/API_runtime_InvokeModel.html) call with an Amazon Titan Image Generator model, replace the `body` field of the request with the format that matches your use case. All tasks share an `imageGenerationConfig` object, but each task has a parameters object specific to that task. The following use cases are supported.



| taskType | Task parameters field | Task type | Definition | 
| --- | --- | --- | --- | 
| TEXT\_IMAGE | textToImageParams | Generation |  Generate an image using a text prompt.  | 
| TEXT\_IMAGE | textToImageParams | Generation |  (V2 image conditioning only) Provide an additional input conditioning image along with a text prompt to generate an image that follows the layout and composition of the conditioning image.  | 
| INPAINTING | inPaintingParams | Editing |  Modify an image by changing the inside of a *mask* to match the surrounding background.  | 
| OUTPAINTING | outPaintingParams | Editing | Modify an image by seamlessly extending the region defined by a mask. | 
| IMAGE\_VARIATION | imageVariationParams | Editing | Modify an image by producing variations of the original image. | 
| COLOR\_GUIDED\_GENERATION (V2 only) | colorGuidedGenerationParams | Generation | Provide a list of hex color codes along with a text prompt to generate an image that follows the color palette. | 
| BACKGROUND\_REMOVAL (V2 only) | backgroundRemovalParams | Editing | Modify an image by identifying multiple objects and removing the background, outputting an image with a transparent background. | 

Editing tasks require an `image` field in the input. This field consists of a string that defines the pixels in the image. Each pixel is defined by three RGB channels, each ranging from 0 to 255 (for example, (255 255 0) represents yellow). These values are encoded in base64.

The images you use must be in JPEG or PNG format.

If you run inpainting or outpainting, you must also define a *mask*, an area that defines the part of the image to modify. There are two ways to define the mask:
+ `maskPrompt` – Write a text prompt that describes the part of the image to mask.
+ `maskImage` – Enter a base64-encoded string that defines the masked area by marking each pixel in the input image as either (0 0 0) or (255 255 255).
  + Pixels defined as (0 0 0) are pixels inside the mask.
  + Pixels defined as (255 255 255) are pixels outside the mask.

  You can draw the mask with a photo-editing tool and then convert the output JPEG or PNG image to a base64-encoded string for input in this field. Otherwise, use the `maskPrompt` field instead to let the model infer the mask.
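A small sketch of the base64 helpers these fields rely on; the function names are hypothetical, not part of any API:

```python
import base64

def encode_image_file(path):
    """Read a JPEG or PNG file and return it as a base64-encoded string."""
    with open(path, "rb") as f:
        return base64.b64encode(f.read()).decode("utf-8")

def decode_image_string(encoded, path):
    """Decode a base64-encoded image string and write the bytes to a file."""
    with open(path, "wb") as f:
        f.write(base64.b64decode(encoded))
```

The string returned by `encode_image_file` is what you would place in fields such as `image` or `maskImage`.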

Select a tab to see the API request body for each image-generation use case and descriptions of its fields.

------
#### [ Text-to-image generation (Request) ]

The text prompt used to generate the image must be <= 512 characters. The resolution of the longest side must be <= 1,408. negativeText (optional) is a text prompt, <= 512 characters, that defines what not to include in the image. See the table below for the full list of resolutions.

```
{
    "taskType": "TEXT_IMAGE",
    "textToImageParams": {
        "text": "string",      
        "negativeText": "string"
    },
    "imageGenerationConfig": {
        "quality": "standard" | "premium",
        "numberOfImages": int,
        "height": int,
        "width": int,
        "cfgScale": float,
        "seed": int
    }
}
```

The `textToImageParams` fields are described below.
+ **text** (Required) – A text prompt to generate the image.
+ **negativeText** (Optional) – A text prompt that defines what not to include in the image.
**Note**  
Don't use negating words in the `negativeText` prompt. For example, if you don't want mirrors in an image, enter **mirrors** in the `negativeText` prompt. Don't enter **no mirrors**.

------
#### [ Inpainting (Request) ]

text (optional) is a text prompt that defines what to change inside the mask. If you don't include this field, the model tries to replace the entire mask area with the background. Must be <= 512 characters. negativeText (optional) is a text prompt, <= 512 characters, that defines what not to include in the image. The input image and input mask are limited to a resolution of <= 1,408 on the longest side of the image. The output size is the same as the input size.

```
{
    "taskType": "INPAINTING",
    "inPaintingParams": {
        "image": "base64-encoded string",                         
        "text": "string",
        "negativeText": "string",        
        "maskPrompt": "string",                      
        "maskImage": "base64-encoded string",   
        "returnMask": boolean # False by default                
    },                                                 
    "imageGenerationConfig": {
        "quality": "standard" | "premium",
        "numberOfImages": int,
        "height": int,
        "width": int,
        "cfgScale": float
    }
}
```

The `inPaintingParams` fields are described below. The *mask* defines the part of the image that you want to modify.
+ **image** (Required) – The JPEG or PNG image to modify, formatted as a string that specifies a sequence of pixels, each defined as an RGB value and encoded in base64. For an example of how to encode an image as base64 and how to decode a base64-encoded string and turn it into an image, see the [code examples](#model-parameters-titan-image-code-examples).
+ You must define one, but not both, of the following fields in order to define the mask.
  + **maskPrompt** – A text prompt that defines the mask.
  + **maskImage** – A string that defines the mask by specifying a sequence of pixels of the same size as `image`. Each pixel becomes either the RGB value (0 0 0) (a pixel inside the mask) or (255 255 255) (a pixel outside the mask). For an example of how to encode an image as base64 and how to decode a base64-encoded string and turn it into an image, see the [code examples](#model-parameters-titan-image-code-examples).
+ **text** (Optional) – A text prompt that defines what to change inside the mask. If you don't include this field, the model tries to replace the entire mask area with the background.
+ **negativeText** (Optional) – A text prompt that defines what not to include in the image.
**Note**  
Don't use negating words in the `negativeText` prompt. For example, if you don't want mirrors in an image, enter **mirrors** in the `negativeText` prompt. Don't enter **no mirrors**.

------
#### [ Outpainting (Request) ]

text (required) is a text prompt that defines what to change outside the mask. Must be <= 512 characters. negativeText (optional) is a text prompt, <= 512 characters, that defines what not to include in the image. The input image and input mask are limited to a resolution of <= 1,408 on the longest side of the image. The output size is the same as the input size.

```
{
    "taskType": "OUTPAINTING",
    "outPaintingParams": {
        "text": "string",
        "negativeText": "string",        
        "image": "base64-encoded string",                         
        "maskPrompt": "string",                      
        "maskImage": "base64-encoded string",    
        "returnMask": boolean, # False by default                                         
        "outPaintingMode": "DEFAULT | PRECISE"                 
    },                                                 
    "imageGenerationConfig": {
        "quality": "standard" | "premium",
        "numberOfImages": int,
        "height": int,
        "width": int,
        "cfgScale": float
    }
}
```

The `outPaintingParams` fields are defined below. The *mask* defines the area of the image that you don't want to modify. The generation seamlessly extends the area that you define.
+ **image** (Required) – The JPEG or PNG image to modify, formatted as a string that specifies a sequence of pixels, each defined as an RGB value and encoded in base64. For an example of how to encode an image as base64 and how to decode a base64-encoded string and turn it into an image, see the [code examples](#model-parameters-titan-image-code-examples).
+ You must define one, but not both, of the following fields in order to define the mask.
  + **maskPrompt** – A text prompt that defines the mask.
  + **maskImage** – A string that defines the mask by specifying a sequence of pixels of the same size as `image`. Each pixel becomes either the RGB value (0 0 0) (a pixel inside the mask) or (255 255 255) (a pixel outside the mask). For an example of how to encode an image as base64 and how to decode a base64-encoded string and turn it into an image, see the [code examples](#model-parameters-titan-image-code-examples).
+ **text** (Required) – A text prompt that defines what to change outside the mask.
+ **negativeText** (Optional) – A text prompt that defines what not to include in the image.
**Note**  
Don't use negating words in the `negativeText` prompt. For example, if you don't want mirrors in an image, enter **mirrors** in the `negativeText` prompt. Don't enter **no mirrors**.
+ **outPaintingMode** – Specifies whether to allow modification of the pixels inside the mask. The following values are possible.
  + DEFAULT – Use this option to allow the image inside the mask to be modified to be consistent with the reconstructed background.
  + PRECISE – Use this option to prevent the image inside the mask from being modified.

------
#### [ Image variation (Request) ]

Image variation lets you create variations of an original image based on parameter values. The input image is limited to a resolution of <= 1,408 on the longest side of the image. See the table below for the full list of resolutions.
+ text (optional) – A text prompt that defines what to preserve and what to change in the image. Must be <= 512 characters.
+ negativeText (optional) – A text prompt that defines what not to include in the image. Must be <= 512 characters.
+ similarityStrength (optional) – Specifies how similar the generated image should be to the input image. Use a lower value to introduce more randomness in the generation. The accepted range is 0.2 to 1.0 (both inclusive); if this parameter is missing from the request, the default value of 0.7 is used.

```
{
     "taskType": "IMAGE_VARIATION",
     "imageVariationParams": {
         "text": "string",
         "negativeText": "string",
         "images": ["base64-encoded string"],
         "similarityStrength": 0.7,  # Range: 0.2 to 1.0
     },
     "imageGenerationConfig": {
         "quality": "standard" | "premium",
         "numberOfImages": int,
         "height": int,
         "width": int,
         "cfgScale": float
     }
}
```

The `imageVariationParams` fields are defined below.
+ **images** (Required) – A list of images for which to generate variations. You can include 1 to 5 images. An image is defined as a base64-encoded image string. For an example of how to encode an image as base64 and how to decode a base64-encoded string and turn it into an image, see the [code examples](#model-parameters-titan-image-code-examples).
+ **text** (Optional) – A text prompt that defines what to preserve and what to change in the image.
+ **similarityStrength** (Optional) – Specifies how similar the generated image should be to the input image. The range is 0.2 to 1.0, with lower values introducing more randomness.
+ **negativeText** (Optional) – A text prompt that defines what not to include in the image.
**Note**  
Don't use negating words in the `negativeText` prompt. For example, if you don't want mirrors in an image, enter **mirrors** in the `negativeText` prompt. Don't enter **no mirrors**.
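Under the field descriptions above, an IMAGE_VARIATION request body might be assembled as in this sketch; the base64 string is a placeholder, not a valid image:

```python
import json

# Placeholder standing in for a real base64-encoded source image.
source_image_b64 = "<base64-encoded source image>"

body = json.dumps({
    "taskType": "IMAGE_VARIATION",
    "imageVariationParams": {
        "text": "a cup of coffee, keep the mug, vary the background",
        "negativeText": "low quality",
        "images": [source_image_b64],  # 1 to 5 source images
        "similarityStrength": 0.7,     # 0.2-1.0; lower -> more random
    },
    "imageGenerationConfig": {
        "numberOfImages": 1,
        "height": 512,
        "width": 512,
        "cfgScale": 8.0,
    },
})
print(body)
```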

------
#### [ Conditioned Image Generation (Request) V2 only ]

The conditioned image generation task type lets customers exercise finer control over text-to-image generation by providing a *condition image* in addition to the text prompt. The condition image can guide generation through:
+ Canny edge detection
+ Segmentation maps

The text prompt used to generate the image must be <= 512 characters. The resolution of the longest side must be <= 1,408. negativeText (optional) is a text prompt, <= 512 characters, that defines what not to include in the image. See the table below for the full list of resolutions.

```
{
    "taskType": "TEXT_IMAGE",
    "textToImageParams": {
        "text": "string",      
        "negativeText": "string",
        "conditionImage": "base64-encoded string", # [OPTIONAL] base64 encoded image
        "controlMode": "string", # [OPTIONAL] CANNY_EDGE | SEGMENTATION. DEFAULT: CANNY_EDGE
        "controlStrength": float # [OPTIONAL] weight given to the condition image. DEFAULT: 0.7
    },
    "imageGenerationConfig": {
        "quality": "standard" | "premium",
        "numberOfImages": int,
        "height": int,
        "width": int,
        "cfgScale": float,
        "seed": int
    }
}
```
+ **text** (Required) – A text prompt to generate the image.
+ **negativeText** (Optional) – A text prompt that defines what not to include in the image.
**Note**  
Don't use negating words in the `negativeText` prompt. For example, if you don't want mirrors in an image, enter **mirrors** in the `negativeText` prompt. Don't enter **no mirrors**.
+ **conditionImage** (Optional, V2 only) – A single input conditioning image that guides the layout and composition of the generated image. The image is defined as a base64-encoded image string. For an example of how to encode an image as base64 and how to decode a base64-encoded string and turn it into an image, see the [code examples](#model-parameters-titan-image-code-examples).
+ **controlMode** (Optional, V2 only) – Specifies the type of conditioning mode to use. Two conditioning modes are supported: CANNY\_EDGE and SEGMENTATION. The default value is CANNY\_EDGE.
+ **controlStrength** (Optional, V2 only) – Specifies how similar the layout and composition of the generated image should be to the conditionImage. The range is 0 to 1.0, with lower values introducing more randomness. The default value is 0.7.

**Note**  
If controlMode or controlStrength is provided, conditionImage must also be provided.
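As a sketch under the field descriptions above, a conditioned TEXT_IMAGE request body could look like this; the base64 string is a placeholder:

```python
import json

# Placeholder standing in for a real base64-encoded condition image.
condition_image_b64 = "<base64-encoded condition image>"

body = json.dumps({
    "taskType": "TEXT_IMAGE",
    "textToImageParams": {
        "text": "a cozy reading nook with warm light",
        "conditionImage": condition_image_b64,
        "controlMode": "CANNY_EDGE",  # or "SEGMENTATION"
        "controlStrength": 0.7,       # 0-1.0; lower -> more randomness
    },
    "imageGenerationConfig": {
        "numberOfImages": 1,
        "height": 1024,
        "width": 1024,
        "cfgScale": 8.0,
        "seed": 0,
    },
})
print(body)
```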

------
#### [ Color Guided Content (Request) V2 only ]

Provide a list of hex color codes along with a text prompt to generate an image that follows the color palette. The text prompt used to generate the image must be <= 512 characters. The maximum resolution of the longest side is 1,408. A list of 1 to 10 hex color codes is required to specify the colors in the generated image. negativeText (optional) is a text prompt, <= 512 characters, that defines what not to include in the image. referenceImage (optional) is an additional reference image used to guide the color palette of the generated image. The user-uploaded RGB reference image is limited to a size of <= 1,408 on the longest side.

```
{
    "taskType": "COLOR_GUIDED_GENERATION",
    "colorGuidedGenerationParams": {
        "text": "string",      
        "negativeText": "string",
        "referenceImage" "base64-encoded string", # [OPTIONAL]
        "colors": ["string"] # list of color hex codes
    },
    "imageGenerationConfig": {
        "quality": "standard" | "premium",
        "numberOfImages": int,
        "height": int,
        "width": int,
        "cfgScale": float,
        "seed": int
    }
}
```

The colorGuidedGenerationParams fields are described below. Note that this task type applies to V2 only.
+ **text** (Required) – A text prompt to generate the image.
+ **colors** (Required) – A list of 1 to 10 hex color codes that specify the colors in the generated image.
+ **negativeText** (Optional) – A text prompt that defines what not to include in the image.
**Note**  
Don't use negating words in the `negativeText` prompt. For example, if you don't want mirrors in an image, enter **mirrors** in the `negativeText` prompt. Don't enter **no mirrors**.
+ **referenceImage** (Optional) – A single input reference image that guides the color palette of the generated image. The image is defined as a base64-encoded image string.
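A sketch of a COLOR_GUIDED_GENERATION request body, with a basic local sanity check on the palette before the request is sent; the palette values are illustrative:

```python
import json
import re

# 1 to 10 hex color codes; checked locally before sending the request.
colors = ["#ff8080", "#ffb280", "#ffe680"]
assert 1 <= len(colors) <= 10
assert all(re.fullmatch(r"#[0-9a-fA-F]{6}", c) for c in colors)

body = json.dumps({
    "taskType": "COLOR_GUIDED_GENERATION",
    "colorGuidedGenerationParams": {
        "text": "a beach scene at sunset",
        "colors": colors,
    },
    "imageGenerationConfig": {
        "numberOfImages": 1,
        "height": 1024,
        "width": 1024,
        "cfgScale": 8.0,
        "seed": 0,
    },
})
print(body)
```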

------
#### [ Background Removal (Request) ]

The background removal task type automatically identifies multiple objects in the input image and removes the background. The output image has a transparent background.

**Request format**

```
{
    "taskType": "BACKGROUND_REMOVAL",
    "backgroundRemovalParams": {
        "image": "base64-encoded string"
    }
}
```

**Response format**

```
{
  "images": [
    "base64-encoded string", 
    ...
  ],
  "error": "string" 
}
```

The backgroundRemovalParams field is described below.
+ **image** (Required) – The JPEG or PNG image to modify, formatted as a string that specifies a sequence of pixels, each defined as an RGB value and encoded in base64.
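A sketch pairing a BACKGROUND_REMOVAL request body with decoding of a hypothetical successful response; the base64 strings are placeholders:

```python
import base64
import json

# Placeholder standing in for a real base64-encoded input image.
input_image_b64 = "<base64-encoded input image>"

body = json.dumps({
    "taskType": "BACKGROUND_REMOVAL",
    "backgroundRemovalParams": {"image": input_image_b64},
})

# Decoding a hypothetical response: each entry in `images` is a
# base64-encoded PNG with a transparent background.
sample_response = {"images": [base64.b64encode(b"\x89PNGfake").decode("utf-8")]}
png_bytes = base64.b64decode(sample_response["images"][0])
print(png_bytes[:4])
```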

------
#### [ Response body ]

```
{
  "images": [
    "base64-encoded string", 
    ...
  ],
  "error": "string" 
}
```

The response body contains one of the following fields.
+ `images` – Returned when the request succeeds: a list of base64-encoded strings, each defining a generated image. Each image is formatted as a string that specifies a sequence of pixels, each defined as an RGB value and encoded in base64. For an example of how to encode an image as base64 and how to decode a base64-encoded string and turn it into an image, see the [code examples](#model-parameters-titan-image-code-examples).
+ `error` – Returns a message in this field if the request violates the content moderation policy in either of the following ways.
  + The input text, image, or mask image is flagged by the content moderation policy.
  + At least one of the output images is flagged by the content moderation policy.

------

The shared and optional `imageGenerationConfig` object contains the following fields. If you don't include this object, the default configuration is used.
+ **quality** – The quality of the image. The default value is `standard`. For pricing details, see [Amazon Bedrock pricing](https://aws.amazon.com/bedrock/pricing/).
+ **numberOfImages** (Optional) – The number of images to generate.  
[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/zh_tw/bedrock/latest/userguide/model-parameters-titan-image.html)
+ **cfgScale** (Optional) – Specifies how strongly the generated image should adhere to the prompt. Use a lower value to introduce more randomness in the generation.  
[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/zh_tw/bedrock/latest/userguide/model-parameters-titan-image.html)
+ The following parameters define the size of the output image. For details about pricing by image size, see [Amazon Bedrock pricing](https://aws.amazon.com/bedrock/pricing/).
  + **height** (Optional) – The height of the image in pixels. The default value is 1408.
  + **width** (Optional) – The width of the image in pixels. The default value is 1408.

  The following sizes are allowed.  
[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/zh_tw/bedrock/latest/userguide/model-parameters-titan-image.html)
+ **seed** (Optional) – Use to control and reproduce results. Determines the initial noise setting. Use the same seed and the same settings as a previous run to allow inference to create a similar image.  
[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/zh_tw/bedrock/latest/userguide/model-parameters-titan-image.html)

## Examples
<a name="model-parameters-titan-image-code-examples"></a>

The following examples show how to invoke the Amazon Titan Image Generator model with on-demand throughput in the Python SDK. Select a tab to see an example for each use case. Each example displays the image at the end.

------
#### [ Text-to-image generation ]

```
# Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved.
# SPDX-License-Identifier: Apache-2.0
"""
Shows how to generate an image from a text prompt with the Amazon Titan Image Generator G1 model (on demand).
"""
import base64
import io
import json
import logging
import boto3
from PIL import Image

from botocore.exceptions import ClientError


class ImageError(Exception):
    "Custom exception for errors returned by Amazon Titan Image Generator G1"

    def __init__(self, message):
        self.message = message


logger = logging.getLogger(__name__)
logging.basicConfig(level=logging.INFO)


def generate_image(model_id, body):
    """
    Generate an image using Amazon Titan Image Generator G1 model on demand.
    Args:
        model_id (str): The model ID to use.
        body (str) : The request body to use.
    Returns:
        image_bytes (bytes): The image generated by the model.
    """

    logger.info(
        "Generating image with Amazon Titan Image Generator G1 model %s", model_id)

    bedrock = boto3.client(service_name='bedrock-runtime')

    accept = "application/json"
    content_type = "application/json"

    response = bedrock.invoke_model(
        body=body, modelId=model_id, accept=accept, contentType=content_type
    )
    response_body = json.loads(response.get("body").read())

    base64_image = response_body.get("images")[0]
    base64_bytes = base64_image.encode('ascii')
    image_bytes = base64.b64decode(base64_bytes)

    finish_reason = response_body.get("error")

    if finish_reason is not None:
        raise ImageError(f"Image generation error. Error is {finish_reason}")

    logger.info(
        "Successfully generated image with Amazon Titan Image Generator G1 model %s", model_id)

    return image_bytes


def main():
    """
    Entrypoint for Amazon Titan Image Generator G1 example.
    """

    logging.basicConfig(level=logging.INFO,
                        format="%(levelname)s: %(message)s")

    model_id = 'amazon.titan-image-generator-v1'

    prompt = """A photograph of a cup of coffee from the side."""

    body = json.dumps({
        "taskType": "TEXT_IMAGE",
        "textToImageParams": {
            "text": prompt
        },
        "imageGenerationConfig": {
            "numberOfImages": 1,
            "height": 1024,
            "width": 1024,
            "cfgScale": 8.0,
            "seed": 0
        }
    })

    try:
        image_bytes = generate_image(model_id=model_id,
                                     body=body)
        image = Image.open(io.BytesIO(image_bytes))
        image.show()

    except ClientError as err:
        message = err.response["Error"]["Message"]
        logger.error("A client error occurred: %s", message)
        print("A client error occured: " +
              format(message))
    except ImageError as err:
        logger.error(err.message)
        print(err.message)

    else:
        print(
            f"Finished generating image with Amazon Titan Image Generator G1 model {model_id}.")


if __name__ == "__main__":
    main()
```

------
#### [ Inpainting ]

```
# Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved.
# SPDX-License-Identifier: Apache-2.0
"""
Shows how to use inpainting to generate an image from a source image with 
the Amazon Titan Image Generator G1 model (on demand).
The example uses a mask prompt to specify the area to inpaint.
"""
import base64
import io
import json
import logging
import boto3
from PIL import Image

from botocore.exceptions import ClientError


class ImageError(Exception):
    "Custom exception for errors returned by Amazon Titan Image Generator G1"

    def __init__(self, message):
        self.message = message


logger = logging.getLogger(__name__)
logging.basicConfig(level=logging.INFO)


def generate_image(model_id, body):
    """
    Generate an image using Amazon Titan Image Generator G1 model on demand.
    Args:
        model_id (str): The model ID to use.
        body (str) : The request body to use.
    Returns:
        image_bytes (bytes): The image generated by the model.
    """

    logger.info(
        "Generating image with Amazon Titan Image Generator G1 model %s", model_id)

    bedrock = boto3.client(service_name='bedrock-runtime')

    accept = "application/json"
    content_type = "application/json"

    response = bedrock.invoke_model(
        body=body, modelId=model_id, accept=accept, contentType=content_type
    )
    response_body = json.loads(response.get("body").read())

    base64_image = response_body.get("images")[0]
    base64_bytes = base64_image.encode('ascii')
    image_bytes = base64.b64decode(base64_bytes)

    finish_reason = response_body.get("error")

    if finish_reason is not None:
        raise ImageError(f"Image generation error. Error is {finish_reason}")

    logger.info(
        "Successfully generated image with Amazon Titan Image Generator G1 model %s", model_id)

    return image_bytes


def main():
    """
    Entrypoint for Amazon Titan Image Generator G1 example.
    """
    try:
        logging.basicConfig(level=logging.INFO,
                            format="%(levelname)s: %(message)s")

        model_id = 'amazon.titan-image-generator-v1'

        # Read image from file and encode it as base64 string.
        with open("/path/to/image", "rb") as image_file:
            input_image = base64.b64encode(image_file.read()).decode('utf8')

        body = json.dumps({
            "taskType": "INPAINTING",
            "inPaintingParams": {
                "text": "Modernize the windows of the house",
                "negativeText": "bad quality, low res",
                "image": input_image,
                "maskPrompt": "windows"
            },
            "imageGenerationConfig": {
                "numberOfImages": 1,
                "height": 512,
                "width": 512,
                "cfgScale": 8.0
            }
        })

        image_bytes = generate_image(model_id=model_id,
                                     body=body)
        image = Image.open(io.BytesIO(image_bytes))
        image.show()

    except ClientError as err:
        message = err.response["Error"]["Message"]
        logger.error("A client error occurred: %s", message)
        print("A client error occured: " +
              format(message))
    except ImageError as err:
        logger.error(err.message)
        print(err.message)

    else:
        print(
            f"Finished generating image with Amazon Titan Image Generator G1 model {model_id}.")


if __name__ == "__main__":
    main()
```

------
#### [ Outpainting ]

```
# Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved.
# SPDX-License-Identifier: Apache-2.0
"""
Shows how to use outpainting to generate an image from a source image with 
the Amazon Titan Image Generator G1 model (on demand).
The example uses a mask image to outpaint the original image.
"""
import base64
import io
import json
import logging
import boto3
from PIL import Image

from botocore.exceptions import ClientError


class ImageError(Exception):
    "Custom exception for errors returned by Amazon Titan Image Generator G1"

    def __init__(self, message):
        self.message = message


logger = logging.getLogger(__name__)
logging.basicConfig(level=logging.INFO)


def generate_image(model_id, body):
    """
    Generate an image using Amazon Titan Image Generator G1 model on demand.
    Args:
        model_id (str): The model ID to use.
        body (str) : The request body to use.
    Returns:
        image_bytes (bytes): The image generated by the model.
    """

    logger.info(
        "Generating image with Amazon Titan Image Generator G1 model %s", model_id)

    bedrock = boto3.client(service_name='bedrock-runtime')

    accept = "application/json"
    content_type = "application/json"

    response = bedrock.invoke_model(
        body=body, modelId=model_id, accept=accept, contentType=content_type
    )
    response_body = json.loads(response.get("body").read())

    base64_image = response_body.get("images")[0]
    base64_bytes = base64_image.encode('ascii')
    image_bytes = base64.b64decode(base64_bytes)

    finish_reason = response_body.get("error")

    if finish_reason is not None:
        raise ImageError(f"Image generation error. Error is {finish_reason}")

    logger.info(
        "Successfully generated image with Amazon Titan Image Generator G1 model %s", model_id)

    return image_bytes


def main():
    """
    Entrypoint for Amazon Titan Image Generator G1 example.
    """
    try:
        logging.basicConfig(level=logging.INFO,
                            format="%(levelname)s: %(message)s")

        model_id = 'amazon.titan-image-generator-v1'

        # Read image and mask image from file and encode as base64 strings.
        with open("/path/to/image", "rb") as image_file:
            input_image = base64.b64encode(image_file.read()).decode('utf8')
        with open("/path/to/mask_image", "rb") as mask_image_file:
            input_mask_image = base64.b64encode(
                mask_image_file.read()).decode('utf8')

        body = json.dumps({
            "taskType": "OUTPAINTING",
            "outPaintingParams": {
                "text": "Draw a chocolate chip cookie",
                "negativeText": "bad quality, low res",
                "image": input_image,
                "maskImage": input_mask_image,
                "outPaintingMode": "DEFAULT"
            },
            "imageGenerationConfig": {
                "numberOfImages": 1,
                "height": 512,
                "width": 512,
                "cfgScale": 8.0
            }
        })

        image_bytes = generate_image(model_id=model_id,
                                     body=body)
        image = Image.open(io.BytesIO(image_bytes))
        image.show()

    except ClientError as err:
        message = err.response["Error"]["Message"]
        logger.error("A client error occurred: %s", message)
        print("A client error occurred: " + format(message))
    except ImageError as err:
        logger.error(err.message)
        print(err.message)

    else:
        print(
            f"Finished generating image with Amazon Titan Image Generator G1 model {model_id}.")


if __name__ == "__main__":
    main()
```

------
#### [ Image variation ]

```
# Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved.
# SPDX-License-Identifier: Apache-2.0
"""
Shows how to generate an image variation from a source image with the
Amazon Titan Image Generator G1 model (on demand).
"""
import base64
import io
import json
import logging
import boto3
from PIL import Image

from botocore.exceptions import ClientError


class ImageError(Exception):
    "Custom exception for errors returned by Amazon Titan Image Generator G1"

    def __init__(self, message):
        self.message = message


logger = logging.getLogger(__name__)
logging.basicConfig(level=logging.INFO)


def generate_image(model_id, body):
    """
    Generate an image using Amazon Titan Image Generator G1 model on demand.
    Args:
        model_id (str): The model ID to use.
        body (str) : The request body to use.
    Returns:
        image_bytes (bytes): The image generated by the model.
    """

    logger.info(
        "Generating image with Amazon Titan Image Generator G1 model %s", model_id)

    bedrock = boto3.client(service_name='bedrock-runtime')

    accept = "application/json"
    content_type = "application/json"

    response = bedrock.invoke_model(
        body=body, modelId=model_id, accept=accept, contentType=content_type
    )
    response_body = json.loads(response.get("body").read())

    base64_image = response_body.get("images")[0]
    base64_bytes = base64_image.encode('ascii')
    image_bytes = base64.b64decode(base64_bytes)

    finish_reason = response_body.get("error")

    if finish_reason is not None:
        raise ImageError(f"Image generation error. Error is {finish_reason}")

    logger.info(
        "Successfully generated image with Amazon Titan Image Generator G1 model %s", model_id)

    return image_bytes


def main():
    """
    Entrypoint for Amazon Titan Image Generator G1 example.
    """
    try:
        logging.basicConfig(level=logging.INFO,
                            format="%(levelname)s: %(message)s")

        model_id = 'amazon.titan-image-generator-v1'

        # Read image from file and encode it as base64 string.
        with open("/path/to/image", "rb") as image_file:
            input_image = base64.b64encode(image_file.read()).decode('utf8')

        body = json.dumps({
            "taskType": "IMAGE_VARIATION",
            "imageVariationParams": {
                "text": "Modernize the house, photo-realistic, 8k, hdr",
                "negativeText": "bad quality, low resolution, cartoon",
                "images": [input_image],
                "similarityStrength": 0.7,  # Range: 0.2 to 1.0
            },
            "imageGenerationConfig": {
                "numberOfImages": 1,
                "height": 512,
                "width": 512,
                "cfgScale": 8.0
            }
        })

        image_bytes = generate_image(model_id=model_id,
                                     body=body)
        image = Image.open(io.BytesIO(image_bytes))
        image.show()

    except ClientError as err:
        message = err.response["Error"]["Message"]
        logger.error("A client error occurred: %s", message)
        print("A client error occurred: " + format(message))
    except ImageError as err:
        logger.error(err.message)
        print(err.message)

    else:
        print(
            f"Finished generating image with Amazon Titan Image Generator G1 model {model_id}.")


if __name__ == "__main__":
    main()
```

------
#### [ Image conditioning (V2 only) ]

```
# Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved.
# SPDX-License-Identifier: Apache-2.0
"""
Shows how to generate image conditioning from a source image with the
Amazon Titan Image Generator G1 V2 model (on demand).
"""
import base64
import io
import json
import logging
import boto3
from PIL import Image

from botocore.exceptions import ClientError


class ImageError(Exception):
    "Custom exception for errors returned by Amazon Titan Image Generator V2"

    def __init__(self, message):
        self.message = message


logger = logging.getLogger(__name__)
logging.basicConfig(level=logging.INFO)


def generate_image(model_id, body):
    """
    Generate an image using Amazon Titan Image Generator V2 model on demand.
    Args:
        model_id (str): The model ID to use.
        body (str) : The request body to use.
    Returns:
        image_bytes (bytes): The image generated by the model.
    """

    logger.info(
        "Generating image with Amazon Titan Image Generator V2 model %s", model_id)

    bedrock = boto3.client(service_name='bedrock-runtime')

    accept = "application/json"
    content_type = "application/json"

    response = bedrock.invoke_model(
        body=body, modelId=model_id, accept=accept, contentType=content_type
    )
    response_body = json.loads(response.get("body").read())

    base64_image = response_body.get("images")[0]
    base64_bytes = base64_image.encode('ascii')
    image_bytes = base64.b64decode(base64_bytes)

    finish_reason = response_body.get("error")

    if finish_reason is not None:
        raise ImageError(f"Image generation error. Error is {finish_reason}")

    logger.info(
        "Successfully generated image with Amazon Titan Image Generator V2 model %s", model_id)

    return image_bytes


def main():
    """
    Entrypoint for Amazon Titan Image Generator V2 example.
    """
    try:
        logging.basicConfig(level=logging.INFO,
                            format="%(levelname)s: %(message)s")

        model_id = 'amazon.titan-image-generator-v2:0'

        # Read image from file and encode it as base64 string.
        with open("/path/to/image", "rb") as image_file:
            input_image = base64.b64encode(image_file.read()).decode('utf8')

        body = json.dumps({
            "taskType": "TEXT_IMAGE",
            "textToImageParams": {
                "text": "A robot playing soccer, anime cartoon style",
                "negativeText": "bad quality, low res",
                "conditionImage": input_image,
                "controlMode": "CANNY_EDGE"
            },
            "imageGenerationConfig": {
                "numberOfImages": 1,
                "height": 512,
                "width": 512,
                "cfgScale": 8.0
            }
        })

        image_bytes = generate_image(model_id=model_id,
                                     body=body)
        image = Image.open(io.BytesIO(image_bytes))
        image.show()

    except ClientError as err:
        message = err.response["Error"]["Message"]
        logger.error("A client error occurred: %s", message)
        print("A client error occurred: " + format(message))
    except ImageError as err:
        logger.error(err.message)
        print(err.message)

    else:
        print(
            f"Finished generating image with Amazon Titan Image Generator V2 model {model_id}.")


if __name__ == "__main__":
    main()
```

------
#### [ Color guided content (V2 only) ]

```
# Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved.
# SPDX-License-Identifier: Apache-2.0
"""
Shows how to generate an image from a source image color palette with the
Amazon Titan Image Generator G1 V2 model (on demand).
"""
import base64
import io
import json
import logging
import boto3
from PIL import Image

from botocore.exceptions import ClientError


class ImageError(Exception):
    "Custom exception for errors returned by Amazon Titan Image Generator V2"

    def __init__(self, message):
        self.message = message


logger = logging.getLogger(__name__)
logging.basicConfig(level=logging.INFO)


def generate_image(model_id, body):
    """
    Generate an image using Amazon Titan Image Generator V2 model on demand.
    Args:
        model_id (str): The model ID to use.
        body (str) : The request body to use.
    Returns:
        image_bytes (bytes): The image generated by the model.
    """

    logger.info(
        "Generating image with Amazon Titan Image Generator V2 model %s", model_id)

    bedrock = boto3.client(service_name='bedrock-runtime')

    accept = "application/json"
    content_type = "application/json"

    response = bedrock.invoke_model(
        body=body, modelId=model_id, accept=accept, contentType=content_type
    )
    response_body = json.loads(response.get("body").read())

    base64_image = response_body.get("images")[0]
    base64_bytes = base64_image.encode('ascii')
    image_bytes = base64.b64decode(base64_bytes)

    finish_reason = response_body.get("error")

    if finish_reason is not None:
        raise ImageError(f"Image generation error. Error is {finish_reason}")

    logger.info(
        "Successfully generated image with Amazon Titan Image Generator V2 model %s", model_id)

    return image_bytes


def main():
    """
    Entrypoint for Amazon Titan Image Generator V2 example.
    """
    try:
        logging.basicConfig(level=logging.INFO,
                            format="%(levelname)s: %(message)s")

        model_id = 'amazon.titan-image-generator-v2:0'

        # Read image from file and encode it as base64 string.
        with open("/path/to/image", "rb") as image_file:
            input_image = base64.b64encode(image_file.read()).decode('utf8')

        body = json.dumps({
            "taskType": "COLOR_GUIDED_GENERATION",
            "colorGuidedGenerationParams": {
                "text": "digital painting of a girl, dreamy and ethereal, pink eyes, peaceful expression, ornate frilly dress, fantasy, intricate, elegant, rainbow bubbles, highly detailed, digital painting, artstation, concept art, smooth, sharp focus, illustration",
                "negativeText": "bad quality, low res",
                "referenceImage": input_image,
                "colors": ["#ff8080", "#ffb280", "#ffe680", "#ffe680"]
            },
            "imageGenerationConfig": {
                "numberOfImages": 1,
                "height": 512,
                "width": 512,
                "cfgScale": 8.0
            }
        })

        image_bytes = generate_image(model_id=model_id,
                                     body=body)
        image = Image.open(io.BytesIO(image_bytes))
        image.show()

    except ClientError as err:
        message = err.response["Error"]["Message"]
        logger.error("A client error occurred: %s", message)
        print("A client error occurred: " + format(message))
    except ImageError as err:
        logger.error(err.message)
        print(err.message)

    else:
        print(
            f"Finished generating image with Amazon Titan Image Generator V2 model {model_id}.")


if __name__ == "__main__":
    main()
```

------
#### [ Background removal (V2 only) ]

```
# Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved.
# SPDX-License-Identifier: Apache-2.0
"""
Shows how to generate an image with background removal with the
Amazon Titan Image Generator G1 V2 model (on demand).
"""
import base64
import io
import json
import logging
import boto3
from PIL import Image

from botocore.exceptions import ClientError


class ImageError(Exception):
    "Custom exception for errors returned by Amazon Titan Image Generator V2"

    def __init__(self, message):
        self.message = message


logger = logging.getLogger(__name__)
logging.basicConfig(level=logging.INFO)


def generate_image(model_id, body):
    """
    Generate an image using Amazon Titan Image Generator V2 model on demand.
    Args:
        model_id (str): The model ID to use.
        body (str) : The request body to use.
    Returns:
        image_bytes (bytes): The image generated by the model.
    """

    logger.info(
        "Generating image with Amazon Titan Image Generator V2 model %s", model_id)

    bedrock = boto3.client(service_name='bedrock-runtime')

    accept = "application/json"
    content_type = "application/json"

    response = bedrock.invoke_model(
        body=body, modelId=model_id, accept=accept, contentType=content_type
    )
    response_body = json.loads(response.get("body").read())

    base64_image = response_body.get("images")[0]
    base64_bytes = base64_image.encode('ascii')
    image_bytes = base64.b64decode(base64_bytes)

    finish_reason = response_body.get("error")

    if finish_reason is not None:
        raise ImageError(f"Image generation error. Error is {finish_reason}")

    logger.info(
        "Successfully generated image with Amazon Titan Image Generator V2 model %s", model_id)

    return image_bytes


def main():
    """
    Entrypoint for Amazon Titan Image Generator V2 example.
    """
    try:
        logging.basicConfig(level=logging.INFO,
                            format="%(levelname)s: %(message)s")

        model_id = 'amazon.titan-image-generator-v2:0'

        # Read image from file and encode it as base64 string.
        with open("/path/to/image", "rb") as image_file:
            input_image = base64.b64encode(image_file.read()).decode('utf8')

        body = json.dumps({
            "taskType": "BACKGROUND_REMOVAL",
            "backgroundRemovalParams": {
                "image": input_image,
            }
        })

        image_bytes = generate_image(model_id=model_id,
                                     body=body)
        image = Image.open(io.BytesIO(image_bytes))
        image.show()

    except ClientError as err:
        message = err.response["Error"]["Message"]
        logger.error("A client error occurred: %s", message)
        print("A client error occurred: " + format(message))
    except ImageError as err:
        logger.error(err.message)
        print(err.message)

    else:
        print(
            f"Finished generating image with Amazon Titan Image Generator V2 model {model_id}.")


if __name__ == "__main__":
    main()
```

------

# Amazon Titan Embeddings G1 - Text
<a name="model-parameters-titan-embed-text"></a>

Titan Embeddings G1 - Text doesn't support the use of inference parameters. The following sections detail the request and response formats and provide a code example.

**Topics**
+ [Request and response](#model-parameters-titan-embed-text-request-response)
+ [Example code](#api-inference-examples-titan-embed-text)

## Request and response
<a name="model-parameters-titan-embed-text-request-response"></a>

You pass the request body in the `body` field of an [InvokeModel](https://docs.aws.amazon.com/bedrock/latest/APIReference/API_runtime_InvokeModel.html) request.

------
#### [ V2 Request ]

The inputText parameter is required. The normalize and dimensions parameters are optional.
+ inputText – The text that you want to convert to an embedding.
+ normalize – (Optional) A flag indicating whether to normalize the output embedding. Defaults to true.
+ dimensions – (Optional) The number of dimensions that the output embedding should have. Accepts the following values: 1024 (default), 512, 256.
+ embeddingTypes – (Optional) Accepts a list containing `float`, `binary`, or both. Defaults to `float`.

```
{
    "inputText": string,
    "dimensions": int,
    "normalize": boolean,
    "embeddingTypes": list
}
```
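As an illustration, the request body above can be assembled in Python as follows (a minimal sketch; the `build_v2_request` helper and its parameter values are illustrative, not part of the API):

```python
import json

def build_v2_request(input_text, dimensions=1024, normalize=True,
                     embedding_types=None):
    """Assemble a Titan Text Embeddings V2 request body as a JSON string."""
    body = {
        "inputText": input_text,
        "dimensions": dimensions,   # 1024 (default), 512, or 256
        "normalize": normalize,     # defaults to true
    }
    if embedding_types is not None:
        # ["float"], ["binary"], or both
        body["embeddingTypes"] = embedding_types
    return json.dumps(body)

body = build_v2_request("What are the different services that you offer?",
                        dimensions=256,
                        embedding_types=["float", "binary"])
```

You can then pass the resulting string as the `body` argument of `invoke_model`, as in the example code later in this section.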

------
#### [ V2 Response ]

The fields are described below.
+ embedding – An array that represents the embedding vector of the input that you provided. This is always of type `float`.
+ inputTextTokenCount – The number of tokens in the input.
+ embeddingsByType – A dictionary or map of the embedding lists. Depending on the input, lists `float`, `binary`, or both.
  + Example: `"embeddingsByType": {"binary": [int,..], "float": [float,...]}`
  + This field always appears. Even if you don't specify `embeddingTypes` in the input, `float` is still included. Example: `"embeddingsByType": {"float": [float,...]}`

```
{
    "embedding": [float, float, ...],
    "inputTextTokenCount": int,
    "embeddingsByType": {"binary": [int,..], "float": [float,...]}
}
```
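For example, these fields can be read from a parsed response body like this (a minimal sketch over hand-written sample data, not an actual model response):

```python
import json

# Sample data shaped like a V2 response body (values are illustrative).
response_body = json.loads("""
{
    "embedding": [0.1, -0.2, 0.3],
    "inputTextTokenCount": 5,
    "embeddingsByType": {"binary": [1, 0, 1], "float": [0.1, -0.2, 0.3]}
}
""")

# "float" is always present in embeddingsByType; "binary" appears only
# if it was requested through embeddingTypes.
float_embedding = response_body["embeddingsByType"]["float"]
binary_embedding = response_body["embeddingsByType"].get("binary")
token_count = response_body["inputTextTokenCount"]
```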

------
#### [ G1 Request ]

The only available field is `inputText`, in which you include the text that you want to convert to an embedding.

```
{
    "inputText": string
}
```

------
#### [ G1 Response ]

The `body` of the response contains the following fields.

```
{
    "embedding": [float, float, ...],
    "inputTextTokenCount": int
}
```

The fields are described below.
+ **embedding** – An array that represents the embedding vector of the input that you provided.
+ **inputTextTokenCount** – The number of tokens in the input.

------

## Example code
<a name="api-inference-examples-titan-embed-text"></a>

The following examples show how to call an Amazon Titan Embeddings model to generate embeddings. Select the tab that corresponds to the model that you're using:

------
#### [ Amazon Titan Embeddings G1 - Text ]

```
# Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved.
# SPDX-License-Identifier: Apache-2.0
"""
Shows how to generate an embedding with the Amazon Titan Embeddings G1 - Text model (on demand).
"""

import json
import logging
import boto3


from botocore.exceptions import ClientError


logger = logging.getLogger(__name__)
logging.basicConfig(level=logging.INFO)


def generate_embedding(model_id, body):
    """
    Generate an embedding with the vector representation of a text input using Amazon Titan Embeddings G1 - Text on demand.
    Args:
        model_id (str): The model ID to use.
        body (str) : The request body to use.
    Returns:
        response (JSON): The embedding created by the model and the number of input tokens.
    """

    logger.info("Generating an embedding with Amazon Titan Embeddings G1 - Text model %s", model_id)

    bedrock = boto3.client(service_name='bedrock-runtime')

    accept = "application/json"
    content_type = "application/json"

    response = bedrock.invoke_model(
        body=body, modelId=model_id, accept=accept, contentType=content_type
    )

    response_body = json.loads(response.get('body').read())

    return response_body


def main():
    """
    Entrypoint for Amazon Titan Embeddings G1 - Text example.
    """

    logging.basicConfig(level=logging.INFO,
                        format="%(levelname)s: %(message)s")

    model_id = "amazon.titan-embed-text-v1"
    input_text = "What are the different services that you offer?"


    # Create request body.
    body = json.dumps({
        "inputText": input_text,
    })


    try:

        response = generate_embedding(model_id, body)

        print(f"Generated an embedding: {response['embedding']}")
        print(f"Input Token count:  {response['inputTextTokenCount']}")

    except ClientError as err:
        message = err.response["Error"]["Message"]
        logger.error("A client error occurred: %s", message)
        print("A client error occurred: " + format(message))

    else:
        print(f"Finished generating an embedding with Amazon Titan Embeddings G1 - Text model {model_id}.")


if __name__ == "__main__":
    main()
```

------
#### [ Amazon Titan Text Embeddings V2 ]

When using Titan Text Embeddings V2, the `embedding` field is not in the response if `embeddingTypes` contains only `binary`.

```
# Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved.
# SPDX-License-Identifier: Apache-2.0
"""
Shows how to generate an embedding with the Amazon Titan Text Embeddings V2 Model
"""

import json
import logging
import boto3


from botocore.exceptions import ClientError


logger = logging.getLogger(__name__)
logging.basicConfig(level=logging.INFO)


def generate_embedding(model_id, body):
    """
    Generate an embedding with the vector representation of a text input using Amazon Titan Text Embeddings G1 on demand.
    Args:
        model_id (str): The model ID to use.
        body (str) : The request body to use.
    Returns:
        response (JSON): The embedding created by the model and the number of input tokens.
    """

    logger.info("Generating an embedding with Amazon Titan Text Embeddings V2 model %s", model_id)

    bedrock = boto3.client(service_name='bedrock-runtime')

    accept = "application/json"
    content_type = "application/json"

    response = bedrock.invoke_model(
        body=body, modelId=model_id, accept=accept, contentType=content_type
    )

    response_body = json.loads(response.get('body').read())

    return response_body


def main():
    """
    Entrypoint for Amazon Titan Embeddings V2 - Text example.
    """

    logging.basicConfig(level=logging.INFO,
                        format="%(levelname)s: %(message)s")

    model_id = "amazon.titan-embed-text-v2:0"
    input_text = "What are the different services that you offer?"


    # Create request body.
    body = json.dumps({
        "inputText": input_text,
        "embeddingTypes": ["binary"]
    })


    try:

        response = generate_embedding(model_id, body)

        print(f"Generated an embedding: {response['embeddingsByType']['binary']}") # returns binary embedding
        print(f"Input text: {input_text}")
        print(f"Input Token count:  {response['inputTextTokenCount']}")

    except ClientError as err:
        message = err.response["Error"]["Message"]
        logger.error("A client error occurred: %s", message)
        print("A client error occurred: " + format(message))

    else:
        print(f"Finished generating an embedding with Amazon Titan Text Embeddings V2 model {model_id}.")


if __name__ == "__main__":
    main()
```

------

# Amazon Titan Multimodal Embeddings G1
<a name="model-parameters-titan-embed-mm"></a>

This section provides request and response body formats and code examples for using Amazon Titan Multimodal Embeddings G1.

**Topics**
+ [Request and response](#model-parameters-titan-embed-mm-request-response)
+ [Example code](#api-inference-examples-titan-embed-mm)

## Request and response
<a name="model-parameters-titan-embed-mm-request-response"></a>

You pass the request body in the `body` field of an [InvokeModel](https://docs.aws.amazon.com/bedrock/latest/APIReference/API_runtime_InvokeModel.html) request.

------
#### [ Request ]

The request body for Amazon Titan Multimodal Embeddings G1 contains the following fields.

```
{
    "inputText": string,
    "inputImage": base64-encoded string,
    "embeddingConfig": {
        "outputEmbeddingLength": 256 | 384 | 1024
    }
}
```

At least one of the following fields is required. Including both generates an embedding vector that averages the resulting text embedding and image embedding vectors.
+ **inputText** – The text that you want to convert to an embedding.
+ **inputImage** – Encode the image that you want to convert to an embedding in base64 and enter the string in this field. For an example of how to encode an image as base64 and decode a base64-encoded string back into an image, see the [code examples](#api-inference-examples-titan-embed-mm).

The following field is optional.
+ **embeddingConfig** – Contains an `outputEmbeddingLength` field, in which you specify one of the following lengths for the output embedding vector.
  + 256
  + 384
  + 1024 (default)
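Putting the fields together, a request body that combines text and an image could be assembled like this (a minimal sketch; the `build_multimodal_request` helper and the image path are hypothetical):

```python
import base64
import json

def build_multimodal_request(text=None, image_path=None, output_length=1024):
    """Assemble a Titan Multimodal Embeddings G1 request body."""
    params = {"embeddingConfig": {"outputEmbeddingLength": output_length}}
    if text is not None:
        params["inputText"] = text
    if image_path is not None:
        # Read the image file and encode it as a base64 string.
        with open(image_path, "rb") as image_file:
            params["inputImage"] = base64.b64encode(
                image_file.read()).decode("utf8")
    return json.dumps(params)

body = build_multimodal_request(text="A family eating dinner",
                                output_length=256)
```

Supplying only `text`, only `image_path`, or both corresponds to the three use cases shown in the example code below.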

------
#### [ Response ]

The `body` of the response contains the following fields.

```
{
    "embedding": [float, float, ...],
    "inputTextTokenCount": int,
    "message": string
}
```

The fields are described below.
+ **embedding** – An array that represents the embedding vector of the input that you provided.
+ **inputTextTokenCount** – The number of tokens in the text input.
+ **message** – Specifies any errors that occur during generation.

------

## Example code
<a name="api-inference-examples-titan-embed-mm"></a>

The following examples show how to invoke the Amazon Titan Multimodal Embeddings G1 model with on-demand throughput using the Python SDK. Select a tab to view an example for each use case.

------
#### [ Text embeddings ]

This example shows how to call the Amazon Titan Multimodal Embeddings G1 model to generate text embeddings.

```
# Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved.
# SPDX-License-Identifier: Apache-2.0
"""
Shows how to generate embeddings from text with the Amazon Titan Multimodal Embeddings G1 model (on demand).
"""

import json
import logging
import boto3


from botocore.exceptions import ClientError

class EmbedError(Exception):
    "Custom exception for errors returned by Amazon Titan Multimodal Embeddings G1"

    def __init__(self, message):
        self.message = message

logger = logging.getLogger(__name__)
logging.basicConfig(level=logging.INFO)


def generate_embeddings(model_id, body):
    """
    Generate a vector of embeddings for a text input using Amazon Titan Multimodal Embeddings G1 on demand.
    Args:
        model_id (str): The model ID to use.
        body (str) : The request body to use.
    Returns:
        response (JSON): The embeddings that the model generated, token information, and the
        reason the model stopped generating embeddings.
    """

    logger.info("Generating embeddings with Amazon Titan Multimodal Embeddings G1 model %s", model_id)

    bedrock = boto3.client(service_name='bedrock-runtime')

    accept = "application/json"
    content_type = "application/json"

    response = bedrock.invoke_model(
        body=body, modelId=model_id, accept=accept, contentType=content_type
    )

    response_body = json.loads(response.get('body').read())

    finish_reason = response_body.get("message")

    if finish_reason is not None:
        raise EmbedError(f"Embeddings generation error: {finish_reason}")

    return response_body


def main():
    """
    Entrypoint for Amazon Titan Multimodal Embeddings G1 example.
    """

    logging.basicConfig(level=logging.INFO,
                        format="%(levelname)s: %(message)s")

    model_id = "amazon.titan-embed-image-v1"
    input_text = "What are the different services that you offer?"
    output_embedding_length = 256

    # Create request body.
    body = json.dumps({
        "inputText": input_text,
        "embeddingConfig": {
            "outputEmbeddingLength": output_embedding_length
        }
    })


    try:

        response = generate_embeddings(model_id, body)

        print(f"Generated text embeddings of length {output_embedding_length}: {response['embedding']}")
        print(f"Input text token count:  {response['inputTextTokenCount']}")

    except ClientError as err:
        message = err.response["Error"]["Message"]
        logger.error("A client error occurred: %s", message)
        print("A client error occurred: " + format(message))
        
    except EmbedError as err:
        logger.error(err.message)
        print(err.message)

    else:
        print(f"Finished generating text embeddings with Amazon Titan Multimodal Embeddings G1 model {model_id}.")


if __name__ == "__main__":
    main()
```

------
#### [ Image embeddings ]

This example shows how to call the Amazon Titan Multimodal Embeddings G1 model to generate image embeddings.

```
# Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved.
# SPDX-License-Identifier: Apache-2.0
"""
Shows how to generate embeddings from an image with the Amazon Titan Multimodal Embeddings G1 model (on demand).
"""

import base64
import json
import logging
import boto3

from botocore.exceptions import ClientError

class EmbedError(Exception):
    "Custom exception for errors returned by Amazon Titan Multimodal Embeddings G1"

    def __init__(self, message):
        self.message = message

logger = logging.getLogger(__name__)
logging.basicConfig(level=logging.INFO)


def generate_embeddings(model_id, body):
    """
    Generate a vector of embeddings for an image input using Amazon Titan Multimodal Embeddings G1 on demand.
    Args:
        model_id (str): The model ID to use.
        body (str) : The request body to use.
    Returns:
        response (JSON): The embeddings that the model generated, token information, and the
        reason the model stopped generating embeddings.
    """

    logger.info("Generating embeddings with Amazon Titan Multimodal Embeddings G1 model %s", model_id)

    bedrock = boto3.client(service_name='bedrock-runtime')

    accept = "application/json"
    content_type = "application/json"

    response = bedrock.invoke_model(
        body=body, modelId=model_id, accept=accept, contentType=content_type
    )

    response_body = json.loads(response.get('body').read())

    finish_reason = response_body.get("message")

    if finish_reason is not None:
        raise EmbedError(f"Embeddings generation error: {finish_reason}")

    return response_body


def main():
    """
    Entrypoint for Amazon Titan Multimodal Embeddings G1 example.
    """

    logging.basicConfig(level=logging.INFO,
                        format="%(levelname)s: %(message)s")

    # Read image from file and encode it as base64 string.
    with open("/path/to/image", "rb") as image_file:
        input_image = base64.b64encode(image_file.read()).decode('utf8')

    model_id = 'amazon.titan-embed-image-v1'
    output_embedding_length = 256

    # Create request body.
    body = json.dumps({
        "inputImage": input_image,
        "embeddingConfig": {
            "outputEmbeddingLength": output_embedding_length
        }
    })


    try:

        response = generate_embeddings(model_id, body)

        print(f"Generated image embeddings of length {output_embedding_length}: {response['embedding']}")

    except ClientError as err:
        message = err.response["Error"]["Message"]
        logger.error("A client error occurred: %s", message)
        print("A client error occurred: " + format(message))
        
    except EmbedError as err:
        logger.error(err.message)
        print(err.message)

    else:
        print(f"Finished generating image embeddings with Amazon Titan Multimodal Embeddings G1 model {model_id}.")


if __name__ == "__main__":
    main()
```

------
#### [ Text and image embeddings ]

This example shows how to call the Amazon Titan Multimodal Embeddings G1 model to generate embeddings from a combined text and image input. The resulting vector is the average of the generated text embedding vector and the image embedding vector.

```
# Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved.
# SPDX-License-Identifier: Apache-2.0
"""
Shows how to generate embeddings from an image and accompanying text with the Amazon Titan Multimodal Embeddings G1 model (on demand).
"""

import base64
import json
import logging
import boto3

from botocore.exceptions import ClientError

class EmbedError(Exception):
    "Custom exception for errors returned by Amazon Titan Multimodal Embeddings G1"

    def __init__(self, message):
        self.message = message

logger = logging.getLogger(__name__)
logging.basicConfig(level=logging.INFO)


def generate_embeddings(model_id, body):
    """
    Generate a vector of embeddings for a combined text and image input using Amazon Titan Multimodal Embeddings G1 on demand.
    Args:
        model_id (str): The model ID to use.
        body (str) : The request body to use.
    Returns:
        response (JSON): The embeddings that the model generated, token information, and the
        reason the model stopped generating embeddings.
    """

    logger.info("Generating embeddings with Amazon Titan Multimodal Embeddings G1 model %s", model_id)

    bedrock = boto3.client(service_name='bedrock-runtime')

    accept = "application/json"
    content_type = "application/json"

    response = bedrock.invoke_model(
        body=body, modelId=model_id, accept=accept, contentType=content_type
    )

    response_body = json.loads(response.get('body').read())

    finish_reason = response_body.get("message")

    if finish_reason is not None:
        raise EmbedError(f"Embeddings generation error: {finish_reason}")

    return response_body


def main():
    """
    Entrypoint for Amazon Titan Multimodal Embeddings G1 example.
    """

    logging.basicConfig(level=logging.INFO,
                        format="%(levelname)s: %(message)s")

    model_id = "amazon.titan-embed-image-v1"
    input_text = "A family eating dinner"
    # Read image from file and encode it as base64 string.
    with open("/path/to/image", "rb") as image_file:
        input_image = base64.b64encode(image_file.read()).decode('utf8')
    output_embedding_length = 256

    # Create request body.
    body = json.dumps({
        "inputText": input_text,
        "inputImage": input_image,
        "embeddingConfig": {
            "outputEmbeddingLength": output_embedding_length
        }
    })


    try:

        response = generate_embeddings(model_id, body)

        print(f"Generated embeddings of length {output_embedding_length}: {response['embedding']}")
        print(f"Input text token count:  {response['inputTextTokenCount']}")

    except ClientError as err:
        message = err.response["Error"]["Message"]
        logger.error("A client error occurred: %s", message)
        print("A client error occured: " +
              format(message))
        
    except EmbedError as err:
        logger.error(err.message)
        print(err.message)

    else:
        print(f"Finished generating embeddings with Amazon Titan Multimodal Embeddings G1 model {model_id}.")


if __name__ == "__main__":
    main()
```
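Both examples return the vector in `response['embedding']`. A common next step is comparing embeddings, for example ranking stored image embeddings against a text-query embedding. The helper below is a minimal standard-library sketch of cosine similarity; the short sample vectors are purely illustrative, not real model output (actual Titan embeddings have the dimensionality you set in `embeddingConfig.outputEmbeddingLength`).

```python
import math


def cosine_similarity(a, b):
    """Return the cosine similarity between two equal-length embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)


# Illustrative 3-dimensional vectors; real embeddings are much longer.
score = cosine_similarity([1.0, 0.0, 1.0], [1.0, 0.0, 0.0])
print(f"Similarity: {score:.4f}")
```

Scores closer to 1.0 indicate more similar inputs; you would typically compute this between a query embedding and each candidate embedding, then sort by score.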

------