


# Deploy a Compiled Model Using the SageMaker SDK
<a name="neo-deployment-hosting-services-sdk"></a>

If you compiled your model using the AWS SDK for Python (Boto3), the AWS CLI, or the Amazon SageMaker AI console, you must satisfy the [Prerequisites](https://docs.aws.amazon.com//sagemaker/latest/dg/neo-deployment-hosting-services-prerequisites) section. Follow one of the following use cases, depending on how you compiled your model, to deploy a model compiled with SageMaker Neo.

**Topics**
+ [If you compiled your model using the SageMaker SDK](#neo-deployment-hosting-services-sdk-deploy-sm-sdk)
+ [If you compiled your model using MXNet or PyTorch](#neo-deployment-hosting-services-sdk-deploy-sm-boto3)
+ [If you compiled your model using Boto3, the SageMaker console, or the CLI for TensorFlow](#neo-deployment-hosting-services-sdk-deploy-sm-boto3-tensorflow)

## If you compiled your model using the SageMaker SDK
<a name="neo-deployment-hosting-services-sdk-deploy-sm-sdk"></a>

The [sagemaker.Model](https://sagemaker.readthedocs.io/en/stable/api/inference/model.html?highlight=sagemaker.Model) object handle for the compiled model provides the [deploy()](https://sagemaker.readthedocs.io/en/stable/api/inference/model.html?highlight=sagemaker.Model#sagemaker.model.Model.deploy) function, which lets you create an endpoint to serve inference requests. The function lets you set the number and type of instances that are used for the endpoint. You must choose an instance type for which you compiled your model. For example, for the job compiled in the [Compiling a Model (Amazon SageMaker SDK)](https://docs.aws.amazon.com/sagemaker/latest/dg/neo-job-compilation-sagemaker-sdk.html) section, this is `ml_c5`.

```
predictor = compiled_model.deploy(initial_instance_count = 1, instance_type = 'ml.c5.4xlarge')

# Print the name of newly created endpoint
print(predictor.endpoint_name)
```
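Once the endpoint is in service, you can send it inference requests through the returned predictor. The following is a minimal sketch; the payload shape `(1, 3, 224, 224)` is only an illustration — use the input shape your model was compiled for, and note that the actual `predict()` call requires a live endpoint and appropriate serialization for your model:

```python
import numpy as np

def invoke(predictor, payload):
    """Send one inference request to the deployed endpoint.

    `predictor` is the object returned by compiled_model.deploy() above.
    """
    return predictor.predict(payload)

# Example payload: a single random image in NCHW layout (illustrative shape).
sample = np.random.rand(1, 3, 224, 224).astype(np.float32)
# response = invoke(predictor, sample)  # requires a live endpoint
```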

## If you compiled your model using MXNet or PyTorch
<a name="neo-deployment-hosting-services-sdk-deploy-sm-boto3"></a>

Create a SageMaker AI model and deploy it using the deploy() API under the framework-specific Model APIs: [MXNetModel](https://sagemaker.readthedocs.io/en/stable/frameworks/mxnet/sagemaker.mxnet.html?highlight=MXNetModel#mxnet-model) for MXNet and [PyTorchModel](https://sagemaker.readthedocs.io/en/stable/frameworks/pytorch/sagemaker.pytorch.html?highlight=PyTorchModel#sagemaker.pytorch.model.PyTorchModel) for PyTorch. When you create and deploy a SageMaker AI model, you must set the `MMS_DEFAULT_RESPONSE_TIMEOUT` environment variable to `500`, specify the `entry_point` parameter as the inference script (`inference.py`), and specify the `source_dir` parameter as the directory location of the inference script (`code`). To prepare the inference script (`inference.py`), follow the prerequisite steps.

The following examples show how to use these functions to deploy a compiled model with the SageMaker AI SDK for Python:

------
#### [ MXNet ]

```
from sagemaker.mxnet import MXNetModel

# Create SageMaker model and deploy an endpoint
sm_mxnet_compiled_model = MXNetModel(
    model_data='insert S3 path of compiled MXNet model archive',
    role='AmazonSageMaker-ExecutionRole',
    entry_point='inference.py',
    source_dir='code',
    framework_version='1.8.0',
    py_version='py3',
    image_uri='insert appropriate ECR Image URI for MXNet',
    env={'MMS_DEFAULT_RESPONSE_TIMEOUT': '500'},
)

# Replace the example instance_type below to your preferred instance_type
predictor = sm_mxnet_compiled_model.deploy(initial_instance_count = 1, instance_type = 'ml.p3.2xlarge')

# Print the name of newly created endpoint
print(predictor.endpoint_name)
```

------
#### [ PyTorch 1.4 and Older ]

```
from sagemaker.pytorch import PyTorchModel

# Create SageMaker model and deploy an endpoint
sm_pytorch_compiled_model = PyTorchModel(
    model_data='insert S3 path of compiled PyTorch model archive',
    role='AmazonSageMaker-ExecutionRole',
    entry_point='inference.py',
    source_dir='code',
    framework_version='1.4.0',
    py_version='py3',
    image_uri='insert appropriate ECR Image URI for PyTorch',
    env={'MMS_DEFAULT_RESPONSE_TIMEOUT': '500'},
)

# Replace the example instance_type below to your preferred instance_type
predictor = sm_pytorch_compiled_model.deploy(initial_instance_count = 1, instance_type = 'ml.p3.2xlarge')

# Print the name of newly created endpoint
print(predictor.endpoint_name)
```

------
#### [ PyTorch 1.5 and Newer ]

```
from sagemaker.pytorch import PyTorchModel

# Create SageMaker model and deploy an endpoint
sm_pytorch_compiled_model = PyTorchModel(
    model_data='insert S3 path of compiled PyTorch model archive',
    role='AmazonSageMaker-ExecutionRole',
    entry_point='inference.py',
    source_dir='code',
    framework_version='1.5',
    py_version='py3',
    image_uri='insert appropriate ECR Image URI for PyTorch',
)

# Replace the example instance_type below to your preferred instance_type
predictor = sm_pytorch_compiled_model.deploy(initial_instance_count = 1, instance_type = 'ml.p3.2xlarge')

# Print the name of newly created endpoint
print(predictor.endpoint_name)
```

------

**Note**  
The `AmazonSageMakerFullAccess` and `AmazonS3ReadOnlyAccess` policies must be attached to the `AmazonSageMaker-ExecutionRole` IAM role.
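If the role is missing these policies, they can also be attached programmatically. The following is a minimal sketch using boto3; the role name matches the examples on this page, so adjust it to the execution role in your account, and note that the caller needs `iam:AttachRolePolicy` permission:

```python
# Managed policies that the examples on this page expect on the execution role.
POLICY_ARNS = [
    "arn:aws:iam::aws:policy/AmazonSageMakerFullAccess",
    "arn:aws:iam::aws:policy/AmazonS3ReadOnlyAccess",
]

def attach_execution_policies(role_name="AmazonSageMaker-ExecutionRole"):
    """Attach the required managed policies to the given IAM role."""
    import boto3  # imported lazily; requires AWS credentials to actually run

    iam = boto3.client("iam")
    for arn in POLICY_ARNS:
        iam.attach_role_policy(RoleName=role_name, PolicyArn=arn)
```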

## If you compiled your model using Boto3, the SageMaker console, or the CLI for TensorFlow
<a name="neo-deployment-hosting-services-sdk-deploy-sm-boto3-tensorflow"></a>

Construct a `TensorFlowModel` object, then call deploy:

```
from sagemaker.tensorflow import TensorFlowModel

role='AmazonSageMaker-ExecutionRole'
model_path='S3 path for model file'
framework_image='inference container arn'
tf_model = TensorFlowModel(model_data=model_path,
                framework_version='1.15.3',
                role=role, 
                image_uri=framework_image)
instance_type='ml.c5.xlarge'
predictor = tf_model.deploy(instance_type=instance_type,
                    initial_instance_count=1)
```

For more information, see [Deploying directly from model artifacts](https://sagemaker.readthedocs.io/en/stable/frameworks/tensorflow/deploying_tensorflow_serving.html#deploying-directly-from-model-artifacts).

You can select an Amazon ECR URI for a Docker image that meets your needs from [this list](https://docs.aws.amazon.com//sagemaker/latest/dg/neo-deployment-hosting-services-container-images.html).

For more information on how to construct a `TensorFlowModel` object, see the [SageMaker SDK](https://sagemaker.readthedocs.io/en/stable/frameworks/tensorflow/sagemaker.tensorflow.html#tensorflow-serving-model).

**Note**  
If you deploy your model on a GPU, the latency of the first inference request might be high. This is because an optimized compute kernel is created on the first inference request. We recommend that you make a warm-up file of inference requests and store it along with your model file before you send it to TFX. This is known as "warming up" the model.

The following code snippet demonstrates how to produce the warm-up file for the image classification example in the [Prerequisites](https://docs.aws.amazon.com//sagemaker/latest/dg/neo-deployment-hosting-services-prerequisites) section:

```
import tensorflow as tf
from tensorflow_serving.apis import predict_pb2
from tensorflow_serving.apis import prediction_log_pb2
import numpy as np

with tf.python_io.TFRecordWriter("tf_serving_warmup_requests") as writer:
    # Build a single random image that matches the classification
    # model's expected 224x224x3 input.
    img = np.random.uniform(0, 1, size=[224, 224, 3]).astype(np.float32)
    img = np.expand_dims(img, axis=0)
    test_data = np.repeat(img, 1, axis=0)

    # Wrap the tensor in a PredictRequest addressed to the compiled model.
    request = predict_pb2.PredictRequest()
    request.model_spec.name = 'compiled_models'
    request.model_spec.signature_name = 'serving_default'
    request.inputs['Placeholder:0'].CopyFrom(
        tf.compat.v1.make_tensor_proto(test_data, shape=test_data.shape, dtype=tf.float32))

    # Serialize the request into the warm-up record file.
    log = prediction_log_pb2.PredictionLog(
        predict_log=prediction_log_pb2.PredictLog(request=request))
    writer.write(log.SerializeToString())
```
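TensorFlow Serving looks for warm-up records in an `assets.extra` directory next to the SavedModel. The following is a minimal sketch that copies the file generated above into place; the `export/compiled_models/1` path in the usage comment is illustrative — use the version directory that holds your model's SavedModel files:

```python
import os
import shutil

def add_warmup_file(model_dir, warmup_file="tf_serving_warmup_requests"):
    """Copy the warm-up record file into <model_dir>/assets.extra/,
    where TensorFlow Serving expects to find it at model load time."""
    extra_dir = os.path.join(model_dir, "assets.extra")
    os.makedirs(extra_dir, exist_ok=True)
    dest = os.path.join(extra_dir, os.path.basename(warmup_file))
    shutil.copy(warmup_file, dest)
    return dest

# Example (assumes tf_serving_warmup_requests was written by the code above):
# add_warmup_file("export/compiled_models/1")
```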

For more information on how to "warm up" a model, see the [TensorFlow TFX page](https://www.tensorflow.org/tfx/serving/saved_model_warmup).