

# Machine learning products in AWS Marketplace
<a name="machine-learning-products"></a>

As an AWS Marketplace seller, you can create machine learning (ML) algorithms and models that your buyers can deploy in AWS. This topic provides information about the Amazon SageMaker AI product types listed in AWS Marketplace.

There are two types of SageMaker AI products listed in AWS Marketplace: 

**Model package**  
 A pre-trained model for making predictions that does not require any further training by the buyer. 

**Algorithm**  
 A model that requires the buyer to supply training data before it makes predictions. The training algorithm is included. 

These products are available to buyers through the Amazon SageMaker AI console or AWS Marketplace. Buyers can review product descriptions, documentation, customer reviews, pricing, and support information. When they subscribe to either a model package product or algorithm product, it’s added to their product list on the SageMaker AI console. Buyers can also use AWS SDKs, the AWS Command Line Interface (AWS CLI), or the SageMaker AI console to create a fully managed REST inference endpoint or perform inference on batches of data. 
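For the SDK route, the buyer's flow is: create a model from the subscribed model package, create an endpoint configuration, and then create the endpoint. The following is a minimal sketch of those request shapes, not an official snippet; the ARNs, names, and instance type are hypothetical placeholders.

```python
# Sketch of the request parameters a buyer might pass to the SageMaker AI
# API (for example, through boto3's sagemaker client) to deploy a
# subscribed model package as a real-time endpoint. All ARNs, names, and
# the instance type below are hypothetical placeholders.

def build_endpoint_requests(model_package_arn, role_arn, name="my-mp-endpoint"):
    """Return the payloads for CreateModel, CreateEndpointConfig,
    and CreateEndpoint, in that order."""
    create_model = {
        "ModelName": name,
        "ExecutionRoleArn": role_arn,
        # Reference the subscribed model package instead of a container image.
        "Containers": [{"ModelPackageName": model_package_arn}],
        # AWS Marketplace models run with network isolation.
        "EnableNetworkIsolation": True,
    }
    create_endpoint_config = {
        "EndpointConfigName": name + "-config",
        "ProductionVariants": [{
            "VariantName": "AllTraffic",
            "ModelName": name,
            "InitialInstanceCount": 1,
            "InstanceType": "ml.m5.large",
        }],
    }
    create_endpoint = {
        "EndpointName": name,
        "EndpointConfigName": name + "-config",
    }
    return create_model, create_endpoint_config, create_endpoint
```

A buyer would pass each payload to the corresponding `create_model`, `create_endpoint_config`, and `create_endpoint` call on a boto3 SageMaker AI client.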

 For support with creating machine learning products with Amazon SageMaker AI, contact the [AWS Marketplace Seller Operations](https://aws.amazon.com/marketplace/management/contact-us/) team. 

# Understanding machine learning products
<a name="ml-overview"></a>

 AWS Marketplace supports two machine learning product types, both built on Amazon SageMaker AI: model package products and algorithm products. Both types produce a deployable inference model for making predictions.

## SageMaker AI model package
<a name="ml-amazon-sagemaker-model-package"></a>

 An [ Amazon SageMaker AI model package](https://docs.aws.amazon.com/sagemaker/latest/dg/sagemaker-marketplace.html#sagemaker-mkt-model-package) product contains a pre-trained model. Pre-trained models can be deployed in SageMaker AI to make inferences or predictions in real time or in batches. This product contains a trained inference component with model artifacts, if any. As a seller, you can train a model using SageMaker AI or bring your own model. 

## SageMaker AI algorithm
<a name="ml-amazon-sagemaker-algorithm"></a>

 Buyers can use a [SageMaker AI algorithm](https://docs.aws.amazon.com/sagemaker/latest/dg/sagemaker-marketplace.html#sagemaker-mkt-algorithm) product to perform complete machine learning workloads. An algorithm product has two logical components: training and inference. In SageMaker AI, buyers use their own datasets to create a training job with your training component. When the algorithm in your training component completes, it generates the model artifacts of the machine learning model. SageMaker AI saves the model artifacts in the buyers’ Amazon Simple Storage Service (Amazon S3) bucket. In SageMaker AI, buyers can then deploy your inference component along with those generated model artifacts to perform inference (or prediction) in real time or in batches. 
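To sketch that training flow, the following shows roughly what the `CreateTrainingJob` request parameters look like when a buyer trains with a subscribed algorithm. This is an illustrative sketch only; the ARNs, S3 URIs, and instance settings are hypothetical placeholders.

```python
# Sketch of CreateTrainingJob parameters for a subscribed algorithm
# product. All names, ARNs, and S3 URIs are hypothetical placeholders.

def build_training_job_request(algorithm_arn, role_arn, input_s3, output_s3,
                               job_name="my-training-job"):
    """Return a CreateTrainingJob-shaped payload for an AWS Marketplace
    algorithm product."""
    return {
        "TrainingJobName": job_name,
        "RoleArn": role_arn,
        # Reference the subscribed algorithm rather than a training image.
        "AlgorithmSpecification": {
            "AlgorithmName": algorithm_arn,
            "TrainingInputMode": "File",
        },
        # The buyer's own dataset.
        "InputDataConfig": [{
            "ChannelName": "training",
            "DataSource": {"S3DataSource": {
                "S3DataType": "S3Prefix",
                "S3Uri": input_s3,
            }},
        }],
        # SageMaker AI saves the generated model artifacts here,
        # in the buyer's Amazon S3 bucket.
        "OutputDataConfig": {"S3OutputPath": output_s3},
        "ResourceConfig": {
            "InstanceType": "ml.m5.large",
            "InstanceCount": 1,
            "VolumeSizeInGB": 30,
        },
        "StoppingCondition": {"MaxRuntimeInSeconds": 3600},
    }
```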

## Deploying an inference model
<a name="ml-deploying-an-inference-model"></a>

 Whether the inference model is created from a model package or an algorithm, there are two methods to deploy it: 
+  **Endpoint** – This method uses SageMaker AI to deploy the model and create an API endpoint. The buyer can use this endpoint as part of their backend service to power their applications. When data is sent to the endpoint, SageMaker AI passes it to the model container and returns the results in an API response. The endpoint and the container continue to run until stopped by the buyer.
**Note**  
 In AWS Marketplace, the endpoint method is referred to as *real-time inference*, and in the SageMaker AI documentation, it is referred to as *hosting services*. For more information, see [Deploy a Model in Amazon SageMaker AI](https://docs.aws.amazon.com/sagemaker/latest/dg/how-it-works-deployment.html). 
+  **Batch transform job** – In this method, a buyer stores datasets for inference in Amazon S3. When the batch transform job starts, SageMaker AI deploys the model, passes data from an S3 bucket to the model’s container, and then returns the results to an Amazon S3 bucket. When the job completes, SageMaker AI stops the job. For more information, see [Use Batch Transform](https://docs.aws.amazon.com/sagemaker/latest/dg/batch-transform.html).
**Note**  
 Both methods are transparent to the model because SageMaker AI passes data to the model and returns results to the buyer. 
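As a rough sketch of the batch transform method, the following shows the general shape of a `CreateTransformJob` request. The model name, S3 URIs, and instance settings are hypothetical placeholders.

```python
# Sketch of CreateTransformJob parameters for batch inference against a
# deployed model. All names and S3 URIs are hypothetical placeholders.

def build_transform_job_request(model_name, input_s3, output_s3,
                                job_name="my-batch-job"):
    """Return a CreateTransformJob-shaped payload that reads a dataset
    from Amazon S3 and writes results back to Amazon S3."""
    return {
        "TransformJobName": job_name,
        "ModelName": model_name,
        # Where SageMaker AI reads the buyer's dataset from.
        "TransformInput": {
            "DataSource": {"S3DataSource": {
                "S3DataType": "S3Prefix",
                "S3Uri": input_s3,
            }},
        },
        # Where SageMaker AI writes the inference results.
        "TransformOutput": {"S3OutputPath": output_s3},
        "TransformResources": {
            "InstanceType": "ml.m5.large",
            "InstanceCount": 1,
        },
    }
```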

# Machine learning product lifecycle
<a name="ml-product-lifecycle"></a>

A machine learning product in AWS Marketplace consists of one or more software versions and associated metadata. Product configuration includes essential properties such as name, description, usage instructions, pricing, categorization, and search keywords. 

## Machine learning product creation process
<a name="ml-publication-process"></a>

 To list a machine learning product in AWS Marketplace, you must complete the following: 

1. [Preparing your product in SageMaker AI](ml-prepare-your-product-in-sagemaker.md)

1. [Listing your product in AWS Marketplace](ml-publishing-your-product-in-aws-marketplace.md)

 Once you have created your machine learning product, you can edit and manage your product. For more information, see [Managing your machine learning products](ml-product-management.md). 

## Machine learning product status
<a name="ml-product-status"></a>

 New products initially have limited visibility, accessible only to allowlisted accounts and the product creator. After testing and validation, you can publish your product to make it available in the AWS Marketplace catalog for all buyers. Products in AWS Marketplace can have the following status values: 


| Status | Definition | 
| --- |--- |
| Staging |  This status indicates an incomplete product for which you're still adding information. After you first save and exit the self-service experience, AWS Marketplace creates an unpublished product containing information from the completed steps. From this status, you can continue to add information or modify submitted details.   | 
| Limited | A product attains this status after it's submitted to AWS Marketplace and passes all validation checks. At this point, the product has a detail page accessible only to your account and allowlisted entities. You can conduct product testing through this detail page. | 
| Public | When you're prepared to make your product visible to buyers for subscription, update the product visibility in the console. Once processed, the product transitions from Limited to Public status. For information about AWS guidelines, see [Requirements and best practices for creating machine learning products](ml-listing-requirements-and-best-practices.md).  | 
| Restricted |  To prevent new users from subscribing to your product, you can restrict it by updating the visibility settings. A Restricted status allows existing allowlisted users to continue using the product, but it will no longer be visible to the public or available to new users.  | 

 For more information or support, contact the [AWS Marketplace Seller Operations team](https://aws.amazon.com/marketplace/management/contact-us/). 

# Machine learning product pricing for AWS Marketplace
<a name="machine-learning-pricing"></a>

You can choose from several available pricing models for your Amazon SageMaker AI products in AWS Marketplace. Buyers who subscribe to your product run it in SageMaker AI within their own AWS account. The price for your buyers is a combination of the infrastructure costs for the resources running in their AWS account and the product pricing that you set. The following sections provide information about pricing models for SageMaker AI products in AWS Marketplace.

**Topics**
+ [Infrastructure pricing](#ml-infrastructure-pricing)
+ [Software pricing](#ml-software-pricing)

## Infrastructure pricing
<a name="ml-infrastructure-pricing"></a>

Buyers are responsible for all the infrastructure costs of SageMaker AI while using your product. These costs are set by AWS and are available on the [Amazon SageMaker AI pricing](https://aws.amazon.com/sagemaker/pricing/) page.

## Software pricing
<a name="ml-software-pricing"></a>

You determine the software prices that AWS Marketplace charges the buyer for using your product. You set the pricing and terms when you add your machine learning product to AWS Marketplace.

All infrastructure and software prices per instance type are presented to the buyer on the product listing pages in AWS Marketplace before the buyer subscribes.

**Topics**
+ [Free pricing](#ml-pricing-free)
+ [Hourly pricing](#ml-pricing-hourly)
+ [Inference pricing](#ml-pricing-inference)
+ [Free trial](#ml-pricing-free-trial)

### Free pricing
<a name="ml-pricing-free"></a>

You can choose to offer your product for free. In this case, the buyer only pays for infrastructure costs.

### Hourly pricing
<a name="ml-pricing-hourly"></a>

You can offer your product with a price per hour per instance of your software running in SageMaker AI. You can charge a different hourly price for each instance type that your software runs on. While a buyer runs your software, AWS Marketplace tracks usage and then bills the buyer accordingly. Usage is prorated to the minute.

For *model package* products, buyers can run your software in two different ways. They can host an endpoint continuously to perform real-time inference or run a batch transform job on a dataset. You can set different pricing for each of these ways a buyer can run your software.

For *algorithm* products, in addition to determining the prices for performing inference, as mentioned earlier, you also determine an hourly price for training jobs. You can charge a different hourly price for each instance type that your training image supports.

### Inference pricing
<a name="ml-pricing-inference"></a>

When the buyer runs your software by hosting an endpoint to continuously perform real-time inference, you can choose to set a price per inference.

**Note**  
The following ML product types always use hourly pricing:  
+ Batch transform jobs
+ Asynchronous inference endpoints
+ Training jobs for algorithm products

You set the price for each type independently of the inference pricing and of each other.

By default, with inference pricing, AWS Marketplace charges your buyer for each invocation of your endpoint. However, in some cases, your software processes a batch of inferences in a single invocation (also known as a *mini-batch*). For an endpoint deployment, you can indicate a custom number of inferences that AWS Marketplace should charge the buyer for that single invocation. To do this, include a custom metering header in the HTTP response headers of your invocation, as in the following example. This example shows an invocation that charges the buyer for three inferences.

```
X-Amzn-Inference-Metering: {"Dimension": "inference.count", "ConsumedUnits": 3}
```

**Note**  
For inference pricing, AWS Marketplace only charges the buyer for requests where the HTTP response code is `2XX`.
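As a minimal sketch, a container might build that header value like the following. The helper name is hypothetical; the header name and JSON shape follow the example above.

```python
import json

# Sketch of how an inference container might report a mini-batch through
# the custom metering header. The helper name is hypothetical; the header
# name and JSON shape follow the documented example.
METERING_HEADER = "X-Amzn-Inference-Metering"

def metering_header(consumed_units):
    """Return the (name, value) header pair for an invocation that should
    be billed as `consumed_units` inferences."""
    value = json.dumps({"Dimension": "inference.count",
                        "ConsumedUnits": int(consumed_units)})
    return METERING_HEADER, value
```

The container would attach this pair to the HTTP response headers of a successful (`2XX`) invocation.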

### Free trial
<a name="ml-pricing-free-trial"></a>

Optionally, you can create a free trial for your product and define the number of days of the free trial. Free trials can be 5–31 days. During the free trial, buyers can run your software as much as they want and aren't charged for your software. Buyers are charged for the infrastructure costs during the free trial. After the trial ends, they are charged your normal software price, along with the infrastructure costs.

When buyers subscribe to a product with a free trial, they receive a welcome email message. The message includes the term of the free trial, a calculated expiration date, and details on unsubscribing. A reminder email message is sent three days before the expiration date.

If you offer a free trial for your product in AWS Marketplace, you agree to the specific [refund policy](https://docs.aws.amazon.com/marketplace/latest/userguide/refunds.html#refund-policy) for free trials. 

**Note**  
For information on Private offers for machine learning, see [Private offers](https://docs.aws.amazon.com/marketplace/latest/userguide/private-offers-overview.html).

# Service restrictions and quotas for machine learning products in AWS Marketplace
<a name="ml-service-restrictions-and-limits"></a>

This section describes restrictions and quotas on your machine learning (ML) products in AWS Marketplace.

**Topics**
+ [Network isolation](#ml-network-isolation)
+ [Image size](#ml-image-size)
+ [Storage size](#ml-storage-size)
+ [Instance size](#ml-instance-size)
+ [Payload size for inference](#ml-payload-size-for-inference)
+ [Processing time for inference](#ml-processing-time-for-inference)
+ [Service quotas](#ml-service-quotas)
+ [Serverless inference](#severless-inference)
+ [Managed spot training](#ml-managed-spot-training)
+ [Docker images and AWS accounts](#ml-docker-images-and-aws-accounts)
+ [Publishing model packages from built-in algorithms or AWS Marketplace](#ml-publishing-model-packages-from-built-in-algorithms-or-aws-marketplace)
+ [Supported AWS Regions for publishing](#ml-supported-aws-regions-for-publishing)

## Network isolation
<a name="ml-network-isolation"></a>

For security purposes, when a buyer subscribes to your containerized product, the Docker containers are run in an isolated environment without network access. When you create your containers, don't rely on making outgoing calls over the internet because they will fail. Calls to AWS services will also fail. 

## Image size
<a name="ml-image-size"></a>

Your Docker image size is governed by the Amazon Elastic Container Registry (Amazon ECR) [service quotas](https://docs.aws.amazon.com/AmazonECR/latest/userguide/service_limits.html). The Docker image size affects the startup time during training jobs, batch-transform jobs, and endpoint creation. For better performance, maintain an optimal Docker image size. 

## Storage size
<a name="ml-storage-size"></a>

When you create an endpoint, Amazon SageMaker AI attaches an Amazon Elastic Block Store (Amazon EBS) storage volume to each ML compute instance that hosts the endpoint. (An endpoint is also known as *real-time inference* or *Amazon SageMaker AI hosting service*.) The size of the storage volume depends on the instance type. For more information, see [Host Instance Storage Volumes](https://docs.aws.amazon.com/sagemaker/latest/dg/host-instance-storage.html) in the *Amazon SageMaker AI Developer Guide*. 

For batch transform, see [Storage in Batch Transform](https://docs.aws.amazon.com/sagemaker/latest/dg/batch-transform-storage.html) in the *Amazon SageMaker AI Developer Guide*. 

## Instance size
<a name="ml-instance-size"></a>

SageMaker AI provides a selection of instance types that are optimized to fit different ML use cases. Instance types consist of varying combinations of CPU, GPU, memory, and networking capacity. Instance types give you the flexibility to choose the appropriate mix of resources for building, training, and deploying your ML models. For more information, see [Amazon SageMaker AI ML Instance Types](https://aws.amazon.com/sagemaker/pricing/instance-types/). 

## Payload size for inference
<a name="ml-payload-size-for-inference"></a>

For an endpoint, the maximum size of the input data per invocation is 25 MB. This value can't be adjusted.

For batch transform, the maximum size of the input data per invocation is 100 MB. This value can't be adjusted.

## Processing time for inference
<a name="ml-processing-time-for-inference"></a>

For an endpoint, the maximum processing time per invocation is 60 seconds for regular responses and 8 minutes for streaming responses. This value can't be adjusted.

For batch transform, the maximum processing time per invocation is 60 minutes. This value can't be adjusted.

## Service quotas
<a name="ml-service-quotas"></a>

For more information about quotas related to training and inference, see [Amazon SageMaker AI Service Quotas](https://docs.aws.amazon.com/general/latest/gr/sagemaker.html#limits_sagemaker). 

## Serverless inference
<a name="severless-inference"></a>

Model packages and algorithms published in AWS Marketplace can't be deployed to endpoints configured for [Amazon SageMaker AI Serverless Inference](https://docs.aws.amazon.com/sagemaker/latest/dg/serverless-endpoints.html). Endpoints configured for serverless inference require models to have network connectivity. All AWS Marketplace models operate in network isolation. For more information, see [No network access](https://docs.aws.amazon.com/marketplace/latest/userguide/ml-security-and-intellectual-property.html#ml-no-network-access).

## Managed spot training
<a name="ml-managed-spot-training"></a>

For all algorithms from AWS Marketplace, the value of `MaxWaitTimeInSeconds` is set to 3,600 seconds (60 minutes), even if the checkpoint for [managed spot training](https://docs.aws.amazon.com/sagemaker/latest/dg/model-managed-spot-training.html) is implemented. This value can't be adjusted. 

## Docker images and AWS accounts
<a name="ml-docker-images-and-aws-accounts"></a>

For publishing, images must be stored in Amazon ECR repositories owned by the AWS account of the seller. It isn't possible to publish images that are stored in a repository owned by another AWS account. 

## Publishing model packages from built-in algorithms or AWS Marketplace
<a name="ml-publishing-model-packages-from-built-in-algorithms-or-aws-marketplace"></a>

Model packages created from training jobs using an [Amazon SageMaker AI built-in algorithm](https://docs.aws.amazon.com/sagemaker/latest/dg/algos.html) or an algorithm from an AWS Marketplace subscription can't be published. 

You can still use the model artifacts from the training job, but your own inference image is required for publishing model packages. 

## Supported AWS Regions for publishing
<a name="ml-supported-aws-regions-for-publishing"></a>

AWS Marketplace supports publishing model package and algorithm resources from AWS Regions where both of the following are true: 
+ The Region is [supported by Amazon SageMaker AI](https://aws.amazon.com/about-aws/global-infrastructure/regional-product-services/) 
+ The Region is [available](https://aws.amazon.com/about-aws/global-infrastructure/regional-product-services/) and opted in by default (for example, [describe-regions](https://docs.aws.amazon.com/general/latest/gr/rande-manage.html#ec2-describe-regions) returns `"OptInStatus": "opt-in-not-required"`) 
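As a sketch, given a response in the shape that EC2 `DescribeRegions` returns, you could filter for Regions that are opted in by default like this (the helper name is hypothetical):

```python
def publishable_candidate_regions(describe_regions_response):
    """Return Region names whose OptInStatus is opt-in-not-required.

    Whether Amazon SageMaker AI supports each Region is a separate check
    against the Regional services list."""
    return sorted(
        r["RegionName"]
        for r in describe_regions_response.get("Regions", [])
        if r.get("OptInStatus") == "opt-in-not-required"
    )
```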

All assets required for publishing a model package or algorithm product must be stored in the same Region that you choose to publish from. This includes the following: 
+ Model package and algorithm resources that are created in Amazon SageMaker AI 
+ Inference and training images that are uploaded to Amazon ECR repositories 
+ Model artifacts (if any) that are stored in Amazon Simple Storage Service and dynamically loaded during model deployment for model package resources 
+ Test data for inference and training validation that are stored in Amazon S3 

You can develop and train your product in any Region that is supported by SageMaker AI. However, before you can publish, you must copy all assets to, and re-create resources in, a Region that AWS Marketplace supports publishing from. 

# Security and intellectual property with Amazon SageMaker AI
<a name="ml-security-and-intellectual-property"></a>

Amazon SageMaker AI protects both your intellectual property and buyer data for models and algorithms obtained from AWS Marketplace. The following sections provide more information about the ways that SageMaker AI protects intellectual property and the security of customer data.

**Topics**
+ [Protecting intellectual property](#ml-protecting-intellectual-property)
+ [No network access](#ml-no-network-access)
+ [Security of customer data](#ml-security-of-customer-data)

## Protecting intellectual property
<a name="ml-protecting-intellectual-property"></a>

 When you create a product, the code is packaged in Docker container images. For more information, see [Preparing your product in SageMaker AI](ml-prepare-your-product-in-sagemaker.md), later in this guide. When you upload a container image, the image and artifacts are encrypted in transit and at rest. The images are also scanned for vulnerabilities before being published. 

 To help safeguard your intellectual property, SageMaker AI allows buyers to access your product only through AWS service endpoints. Buyers can't directly access or pull container images or model artifacts, nor can they access the underlying infrastructure. 

## No network access
<a name="ml-no-network-access"></a>

 Unlike SageMaker AI models and algorithms that buyers create, when buyers launch your product from AWS Marketplace, the models and algorithms are deployed without network access. SageMaker AI deploys images in an environment with no access to the network or AWS service endpoints. For example, a container image can't make outbound API calls to services on the internet, [VPC endpoints](https://docs.aws.amazon.com/vpc/latest/userguide/vpc-endpoints.html), or any other AWS services. 

## Security of customer data
<a name="ml-security-of-customer-data"></a>

 Your product runs in SageMaker AI within the buyer’s AWS account. So, when a buyer uses your product to perform data inference, you as the seller can't access their data. 

 For algorithm products, model artifacts are output by your training image after each training job and are stored in the buyer’s account. The model artifacts from the training job are used when the buyer deploys the model with your inference image. To protect any intellectual property that might be contained in the model artifacts, encrypt them before outputting them. 

**Important**  
 This security model prevents your code from accessing the internet during runtime. Therefore, your code can't use resources or libraries from the internet, so package your dependencies in the Docker container image. This is especially important if you choose to encrypt your outputted artifacts from the training job. The keys to encrypt and decrypt artifacts can't be accessed over the internet at runtime. They must be packaged with your image. 

 For more information, see [Security in Amazon SageMaker AI](https://docs.aws.amazon.com/sagemaker/latest/dg/security.html). 

# Machine learning reports in AWS Marketplace
<a name="ml-reporting"></a>

AWS Marketplace produces reports for your Amazon SageMaker AI products that include data about buyers, financials, usage, and taxes. All reports are available in the AWS Marketplace Management Portal on the [Reports page](https://aws.amazon.com/marketplace/management/reports). For more information, see [Seller Reports](https://docs.aws.amazon.com/marketplace/latest/userguide/Reporting.html). The following sections provide summary information about reports for machine learning products.

**Topics**
+ [Daily business report](#ml-daily-business-report)
+ [Monthly revenue report](#ml-monthly-revenue-report)
+ [Disbursement report](#ml-disbursement-report)
+ [Other reports and analysis](#ml-other-reports)

## Daily business report
<a name="ml-daily-business-report"></a>

 The daily business report provides the instance type, hours of usage, revenue from software charges, and other details for each buyer and product. Buyers can be identified by their AWS account ID. For more information, see [Daily business report](https://docs.aws.amazon.com/marketplace/latest/userguide/daily-business-report.html). 

## Monthly revenue report
<a name="ml-monthly-revenue-report"></a>

 The monthly revenue report provides you with the monthly revenue that has been billed to your buyers for using your software. For more information, see [Monthly billed revenue report](https://docs.aws.amazon.com/marketplace/latest/userguide/monthly-billed-revenue-report.html). 

## Disbursement report
<a name="ml-disbursement-report"></a>

 The monthly disbursement report provides a breakdown of all funds collected on your behalf during the settlement period for your software charges. The total settlement amount reflected in the report should match the amount deposited to your bank account. For more information, see [Disbursement report](https://docs.aws.amazon.com/marketplace/latest/userguide/monthly-disbursement-report.html). 

## Other reports and analysis
<a name="ml-other-reports"></a>

 For other available reports, see [Seller reports](https://docs.aws.amazon.com/marketplace/latest/userguide/dashboards.html). 

You can also create custom reports using the available [Seller delivery data feeds in AWS Marketplace](data-feed-service.md) from AWS Marketplace.

# Preparing your product in SageMaker AI
<a name="ml-prepare-your-product-in-sagemaker"></a>

Before you can publish your product in AWS Marketplace, you must prepare it in Amazon SageMaker AI. There are two types of SageMaker AI products listed in AWS Marketplace: model packages and algorithms. For more information, see [Machine learning products in AWS Marketplace](machine-learning-products.md). This topic provides an overview of the three steps that are required to prepare your product:

1. [Packaging your code into images for machine learning products in AWS Marketplace](ml-packaging-your-code-into-images.md) – To prepare a model package or algorithm product, you must create the Docker container images for your product. 

1. [Uploading your images to Amazon Elastic Container Registry](ml-uploading-your-images.md) – After packaging your code in container images and testing them locally, upload the images and scan them for known vulnerabilities. Fix any vulnerabilities before continuing. 

1.  [Creating your Amazon SageMaker AI resource](ml-creating-your-amazon-sagemaker-resource.md) – After your images are scanned successfully, you can use them to create a model package or algorithm resource in SageMaker AI.

# Packaging your code into images for machine learning products in AWS Marketplace
<a name="ml-packaging-your-code-into-images"></a>

Machine learning products in AWS Marketplace use Amazon SageMaker AI to create and run the machine learning logic you provide for buyers. SageMaker AI runs the Docker container images that contain your logic in a secure and scalable infrastructure. For more information, see [Security and intellectual property with Amazon SageMaker AI](ml-security-and-intellectual-property.md). The following sections provide information about how to package your code into Docker container images for SageMaker AI.

**Topics**
+ [Which type of container image do I create?](#ml-which-type-of-container-image-do-i-create)
+ [Creating model package images](ml-model-package-images.md)
+ [Creating algorithm images](ml-algorithm-images.md)

## Which type of container image do I create?
<a name="ml-which-type-of-container-image-do-i-create"></a>

 The two types of container images are an inference image and a training image. 

 To create a model package product, you need only an inference image. For detailed instructions, see [Creating model package images](ml-model-package-images.md). 

 To create an algorithm product, you need both training and inference images. For detailed instructions, see [Creating algorithm images](ml-algorithm-images.md). 

 To package code properly into a container image, the container must adhere to the SageMaker AI file structure. The container must expose the correct endpoints to ensure that the service can pass data to and from your container. The following sections explain the details of this process. 

**Important**  
 For security purposes, when a buyer subscribes to your containerized product, the Docker containers run in an isolated environment without an internet connection. When you create your containers, don't rely on outgoing calls over the internet because they will fail. Calls to AWS services will also fail. For more information, see the [Security and intellectual property with Amazon SageMaker AI](ml-security-and-intellectual-property.md) section. 

 Optionally, when creating your inference and training images, use a container from [Available Deep Learning Containers Images](https://aws.amazon.com/releasenotes/available-deep-learning-containers-images/) as a starting point. The images are already properly packaged with different machine learning frameworks. 

# Creating model package images
<a name="ml-model-package-images"></a>

An Amazon SageMaker AI model package is a pre-trained model that makes predictions and doesn't require any further training by the buyer. You can create a model package in SageMaker AI and publish your machine learning product on AWS Marketplace. The following sections show you how to create a model package for AWS Marketplace, including creating the container image and building and testing the image locally.

**Topics**
+ [Overview](#ml-model-package-images-overview)
+ [Create an inference image for model packages](#ml-creating-an-inference-image-for-model-packages)

## Overview
<a name="ml-model-package-images-overview"></a>

 A model package includes the following components: 
+  An inference image stored in [Amazon Elastic Container Registry](https://aws.amazon.com/ecr/) (Amazon ECR) 
+  (Optional) Model artifacts, stored separately in [Amazon S3](https://aws.amazon.com/s3/) 

**Note**  
Model artifacts are files your model uses to make predictions and are generally the result of your own training processes. Artifacts can be any file type that is needed by your model but must use .tar.gz compression. For model packages, they can either be bundled within your inference image or stored separately in Amazon S3. The model artifacts stored in Amazon S3 are loaded into the inference container at runtime. When you publish your model package, those artifacts are published and stored in AWS Marketplace owned Amazon S3 buckets that aren't directly accessible by the buyer. 

**Tip**  
If your inference model is built with a deep learning framework such as Gluon, Keras, MXNet, PyTorch, TensorFlow, TensorFlow-Lite, or ONNX, consider using Amazon SageMaker AI Neo. Neo can automatically optimize inference models that deploy to a specific family of cloud instance types such as `ml.c4`, `ml.p2`, and others. For more information, see [Optimize model performance using Neo](https://docs.aws.amazon.com/sagemaker/latest/dg/neo.html) in the *Amazon SageMaker AI Developer Guide*.

The following diagram shows the workflow for publishing and using model package products. 

![\[Diagram of how a seller creates a model package image and how a buyer uses it.\]](http://docs.aws.amazon.com/marketplace/latest/userguide/images/ml-model-package-images.png)


The workflow for creating a SageMaker AI model package for AWS Marketplace includes the following steps:

1. The seller creates an inference image (which has no network access when deployed) and pushes it to Amazon ECR. 

   The model artifacts can either be bundled in the inference image or stored separately in Amazon S3.

1. The seller then creates a model package resource in Amazon SageMaker AI and publishes their ML product on AWS Marketplace.

1. The buyer subscribes to the ML product and deploys the model. 
**Note**  
 The model can be deployed as an endpoint for real-time inferences or as a batch job to get predictions for an entire dataset at once. For more information, see [Deploy Models for Inference](https://docs.aws.amazon.com/sagemaker/latest/dg/deploy-model.html). 

1. SageMaker AI runs the inference image. Any seller-provided model artifacts not bundled in the inference image are loaded dynamically at runtime. 

1.  SageMaker AI passes the buyer’s inference data to the container by using the container’s HTTP endpoints and returns the prediction results. 
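As a rough sketch of the seller's side of this workflow, the following shows how a model package request for the SageMaker AI `CreateModelPackage` API might be assembled with `boto3`. The package name, image URI, model data URL, content types, and instance types below are placeholders you would replace with your own values, and the API call itself is shown as a comment because it requires AWS credentials and a real image in Amazon ECR.

```python
def build_model_package_request(package_name, image_uri, model_data_url):
    """Assemble a CreateModelPackage request for an AWS Marketplace listing (sketch)."""
    return {
        "ModelPackageName": package_name,
        "InferenceSpecification": {
            # Inference image in Amazon ECR, plus optional separate artifacts in S3
            "Containers": [{"Image": image_uri, "ModelDataUrl": model_data_url}],
            "SupportedContentTypes": ["text/csv"],
            "SupportedResponseMIMETypes": ["application/json"],
            "SupportedRealtimeInferenceInstanceTypes": ["ml.m5.large"],
            "SupportedTransformInstanceTypes": ["ml.m5.large"],
        },
        # Required for publishing the model package on AWS Marketplace
        "CertifyForMarketplace": True,
    }

request = build_model_package_request(
    "my-model-package",
    "123456789012.dkr.ecr.us-east-1.amazonaws.com/my-inference-image:latest",
    "s3://amzn-s3-demo-bucket/model.tar.gz",
)

# import boto3
# sagemaker = boto3.client("sagemaker")
# sagemaker.create_model_package(**request)
```

Building the request as a plain dictionary keeps the sketch testable without credentials; in practice you would pass it directly to the SageMaker AI client.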

## Create an inference image for model packages
<a name="ml-creating-an-inference-image-for-model-packages"></a>

This section provides a walkthrough for packaging your inference code into an inference image for your model package product. The process consists of the following steps:

**Topics**
+ [Step 1: Create the container image](#ml-step-1-creating-the-container-image)
+ [Step 2: Building and testing the image locally](#ml-step-2-building-and-testing-the-image-locally)

The inference image is a Docker image containing your inference logic. The container at runtime exposes HTTP endpoints to allow SageMaker AI to pass data to and from your container. 

**Note**  
 The following is only one example of packaging code for an inference image. For more information, see [Using Docker containers with SageMaker AI](https://docs.aws.amazon.com/sagemaker/latest/dg/your-algorithms.html) and the [AWS Marketplace SageMaker AI examples](https://github.com/aws/amazon-sagemaker-examples/tree/master/aws_marketplace) on GitHub.  
The following example uses a web service, [Flask](https://pypi.org/project/Flask/), for simplicity, and is not considered production-ready.

### Step 1: Create the container image
<a name="ml-step-1-creating-the-container-image"></a>

 For the inference image to be compatible with SageMaker AI, the Docker image must expose HTTP endpoints. While your container is running, SageMaker AI passes buyer inputs for inference to the container’s HTTP endpoint. The inference results are returned in the body of the HTTP response. 

 The following walkthrough uses the Docker CLI in a development environment on an Ubuntu distribution of Linux. 
+ [Create the web server script](#ml-create-the-web-server-script)
+ [Create the script for the container run](#ml-create-the-script-for-the-container-run)
+ [Create the `Dockerfile`](#ml-create-the-dockerfile)
+ [Package or upload the model artifacts](#ml-package-or-upload-the-model-artifacts)

#### Create the web server script
<a name="ml-create-the-web-server-script"></a>

 This example uses a Python server called [Flask](https://pypi.org/project/Flask/), but you can use any web server that works for your framework. 

**Note**  
[Flask](https://pypi.org/project/Flask/) is used here for simplicity. It is not considered a production-ready web server.

 Create a Flask web server script that serves the two HTTP endpoints on TCP port 8080 that SageMaker AI uses. The following are the two expected endpoints: 
+  `/ping` – SageMaker AI makes HTTP GET requests to this endpoint to check if your container is ready. When your container is ready, it responds to HTTP GET requests at this endpoint with an HTTP 200 response code. 
+  `/invocations` – SageMaker AI makes HTTP POST requests to this endpoint for inference. The input data for inference is sent in the body of the request. The user-specified content type is passed in the HTTP header. The body of the response is the inference output. For details about timeouts, see [Requirements and best practices for creating machine learning products](ml-listing-requirements-and-best-practices.md). 

 **`./web_app_serve.py`** 

```
# Import modules
import json
from flask import Flask
from flask import request
app = Flask(__name__)

# Create a path for health checks
@app.route("/ping")
def endpoint_ping():
  return ""
 
# Create a path for inference
@app.route("/invocations", methods=["POST"])
def endpoint_invocations():
  
  # Read the input
  input_str = request.get_data().decode("utf8")
  
  # Add your inference code between these comments.
  #
  #
  #
  #
  #
  # Add your inference code above this comment.
  
  # Return a response with a prediction
  response = {"prediction":"a","text":input_str}
  return json.dumps(response)
```

In the previous example, there is no actual inference logic. For your actual inference image, add the inference logic into the web app so it processes the input and returns the actual prediction.

Your inference image must contain all of its required dependencies because it will not have internet access, nor will it be able to make calls to any AWS services.

**Note**  
The same code path handles both real-time and batch inference.

#### Create the script for the container run
<a name="ml-create-the-script-for-the-container-run"></a>

 Create a script named `serve` that SageMaker AI runs when it runs the Docker container image. The following script starts the HTTP web server. 

 **`./serve`** 

```
#!/bin/bash

# Run flask server on port 8080 for SageMaker
flask run --host 0.0.0.0 --port 8080
```

#### Create the `Dockerfile`
<a name="ml-create-the-dockerfile"></a>

 Create a `Dockerfile` in your build context. This example uses Ubuntu 18.04, but you can start from any base image that works for your framework. 

 `./Dockerfile` 

```
FROM ubuntu:18.04

# Specify encoding
ENV LC_ALL=C.UTF-8
ENV LANG=C.UTF-8

# Install python-pip
RUN apt-get update \
&& apt-get install -y python3.6 python3-pip \
&& ln -s /usr/bin/python3.6 /usr/bin/python \
&& ln -s /usr/bin/pip3 /usr/bin/pip;

# Install flask server
RUN pip install -U Flask;

# Add a web server script to the image
# Set an environment to tell flask the script to run
COPY /web_app_serve.py /web_app_serve.py
ENV FLASK_APP=/web_app_serve.py

# Add a script that Amazon SageMaker AI will run
# Set run permissions
# Prepend program directory to $PATH
COPY /serve /opt/program/serve
RUN chmod 755 /opt/program/serve
ENV PATH=/opt/program:${PATH}
```

 The `Dockerfile` adds the two previously created scripts to the image. The directory of the `serve` script is added to the PATH so it can run when the container runs. 

#### Package or upload the model artifacts
<a name="ml-package-or-upload-the-model-artifacts"></a>

 There are two ways to provide the model artifacts from training to the inference image: 
+  Packaged statically with the inference image. 
+  Loaded dynamically at runtime. Because the artifacts are loaded dynamically, you can use the same image to package different machine learning models.

 If you want to package your model artifacts with the inference image, include the artifacts in the `Dockerfile`. 

 If you want to load your model artifacts dynamically, store those artifacts separately in a compressed file (.tar.gz) in Amazon S3. When creating the model package, specify the location of the compressed file, and SageMaker AI extracts and copies the contents to the container directory `/opt/ml/model/` when running your container. When publishing your model package, those artifacts are published and stored in AWS Marketplace owned Amazon S3 buckets that are inaccessible by the buyer directly. 
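The compression step for dynamically loaded artifacts can look like the following sketch. The directory contents here are stand-ins for your real training output, the bucket name is a placeholder, and the upload itself is shown as a comment because it requires AWS credentials. Note that the archive should contain the artifact files at its root, because SageMaker AI extracts it directly into `/opt/ml/model/`.

```shell
# Create a sample artifacts directory (stand-in for your real training output)
mkdir -p model_artifacts
echo '{"weights": [0.1, 0.2]}' > model_artifacts/model.json

# Compress the directory contents (not the directory itself) into a .tar.gz file
tar -czf model.tar.gz -C model_artifacts .

# List the archive contents to verify the files sit at the archive root
tar -tzf model.tar.gz

# Upload to your bucket, then reference this S3 URI when creating the model package:
# aws s3 cp model.tar.gz s3://amzn-s3-demo-bucket/model.tar.gz
```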

### Step 2: Building and testing the image locally
<a name="ml-step-2-building-and-testing-the-image-locally"></a>

 In the build context, the following files now exist: 
+  `./Dockerfile` 
+  `./web_app_serve.py` 
+  `./serve` 
+  Your inference logic and (optional) dependencies 

 Next, build, run, and test the container image. 

#### Build the image
<a name="ml-build-the-image"></a>

 Run the Docker command in the build context to build and tag the image. This example uses the tag `my-inference-image`. 

```
sudo docker build --tag my-inference-image ./
```

 After running this Docker command to build the image, you should see output as Docker builds the image based on each line in your `Dockerfile`. When it finishes, you should see something similar to the following. 

```
Successfully built abcdef123456
Successfully tagged my-inference-image:latest
```

#### Run locally
<a name="ml-run-locally"></a>

 After your build has completed, you can test the image locally. 

```
sudo docker run \
  --rm \
  --publish 8080:8080/tcp \
  --detach \
  --name my-inference-container \
  my-inference-image \
  serve
```

 The following are details about the command: 
+ `--rm` – Automatically remove the container after it stops.
+ `--publish 8080:8080/tcp` – Expose port 8080 to simulate the port that SageMaker AI sends HTTP requests to.
+ `--detach` – Run the container in the background.
+ `--name my-inference-container` – Give this running container a name.
+ `my-inference-image` – Run the built image.
+ `serve` – Run the same script that SageMaker AI runs when running the container.

 After running this command, Docker creates a container from the inference image you built and runs it in the background. The container runs the `serve` script, which launches your web server for testing purposes. 

#### Test the ping HTTP endpoint
<a name="ml-test-the-ping-http-endpoint"></a>

 When SageMaker AI runs your container, it periodically pings the endpoint. When the endpoint returns an HTTP response with status code 200, it signals to SageMaker AI that the container is ready for inference. You can test this by running the following command, which tests the endpoint and includes the response header. 

```
curl --include http://127.0.0.1:8080/ping
```

Example output is as follows.

```
HTTP/1.0 200 OK
Content-Type: text/html; charset=utf-8
Content-Length: 0
Server: MyServer/0.16.0 Python/3.6.8
Date: Mon, 21 Oct 2019 06:58:54 GMT
```

#### Test the inference HTTP endpoint
<a name="ml-test-the-inference-http-endpoint"></a>

 When the container indicates it is ready by returning a 200 status code to your ping, SageMaker AI passes the inference data to the `/invocations` HTTP endpoint via a `POST` request. Test the inference endpoint by running the following command. 

```
curl \
  --request POST \
  --data "hello world" \
  http://127.0.0.1:8080/invocations
```

 Example output is as follows. 

 `{"prediction": "a", "text": "hello world"}` 

 With these two HTTP endpoints working, the inference image is now compatible with SageMaker AI. 

**Note**  
 The model of your model package product can be deployed in two ways: real time and batch. In both deployments, SageMaker AI uses the same HTTP endpoints while running the Docker container. 

 To stop the container, run the following command.

```
sudo docker container stop my-inference-container
```

 When your inference image is ready and tested, you can continue to [Uploading your images to Amazon Elastic Container Registry](ml-uploading-your-images.md). 

# Creating algorithm images
<a name="ml-algorithm-images"></a>

An Amazon SageMaker AI algorithm requires that the buyer bring their own data to train before it makes predictions. As an AWS Marketplace seller, you can use SageMaker AI to create machine learning (ML) algorithms and models that your buyers can deploy in AWS. The following sections show you how to create algorithm images for AWS Marketplace. This includes creating the Docker training image for training your algorithm and the inference image that contains your inference logic. Both the training and inference images are required when publishing an algorithm product.

**Topics**
+ [Overview](#ml-algorithm-images-overview)
+ [Creating a training image for algorithms](#ml-creating-a-training-image-for-algorithms)
+ [Creating an inference image for algorithms](#ml-creating-an-inference-image-for-algorithms)

## Overview
<a name="ml-algorithm-images-overview"></a>

An algorithm includes the following components: 
+  A training image stored in [Amazon ECR](https://aws.amazon.com/ecr/) 
+  An inference image stored in Amazon Elastic Container Registry (Amazon ECR) 

**Note**  
 For algorithm products, the training container generates model artifacts that are loaded into the inference container on model deployment. 

The following diagram shows the workflow for publishing and using algorithm products.

![\[Diagram of how a seller creates an algorithm package image and how a buyer uses it.\]](http://docs.aws.amazon.com/marketplace/latest/userguide/images/ml-algorithm-package-images.png)


The workflow for creating a SageMaker AI algorithm for AWS Marketplace includes the following steps:

1. The seller creates a training image and an inference image (no network access when deployed) and uploads them to Amazon ECR. 

1. The seller then creates an algorithm resource in Amazon SageMaker AI and publishes their ML product on AWS Marketplace.

1. The buyer subscribes to the ML product. 

1. The buyer creates a training job with a compatible dataset and appropriate hyperparameter values. SageMaker AI runs the training image and loads the training data and hyperparameters into the training container. When the training job completes, the model artifacts located in `/opt/ml/model/` are compressed and copied to the buyer’s [Amazon S3](https://aws.amazon.com/s3/) bucket. 

1. The buyer creates a model package with the model artifacts from the training stored in Amazon S3 and deploys the model. 

1. SageMaker AI runs the inference image, extracts the compressed model artifacts, and loads the files into the inference container directory path `/opt/ml/model/` where it is consumed by the code that serves the inference. 

1.  Whether the model deploys as an endpoint or a batch transform job, SageMaker AI passes the data for inference on behalf of the buyer to the container via the container’s HTTP endpoint and returns the prediction results. 

**Note**  
 For more information, see [Train Models](https://docs.aws.amazon.com/sagemaker/latest/dg/train-model.html). 
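Step 2 of this workflow, registering the algorithm resource, can be sketched as a `CreateAlgorithm` request assembled for the SageMaker AI `boto3` client. The algorithm name, image URIs, channel name, content types, and instance types are placeholders, and the call itself is commented out because it requires AWS credentials and real images in Amazon ECR.

```python
def build_algorithm_request(algorithm_name, training_image, inference_image):
    """Assemble a CreateAlgorithm request for an AWS Marketplace listing (sketch)."""
    return {
        "AlgorithmName": algorithm_name,
        "TrainingSpecification": {
            "TrainingImage": training_image,
            "SupportedTrainingInstanceTypes": ["ml.m5.large"],
            "SupportsDistributedTraining": False,
            # Channels the buyer's CreateTrainingJob call must provide
            "TrainingChannels": [
                {
                    "Name": "training",
                    "SupportedContentTypes": ["text/csv"],
                    "SupportedInputModes": ["File"],
                }
            ],
        },
        "InferenceSpecification": {
            "Containers": [{"Image": inference_image}],
            "SupportedContentTypes": ["text/csv"],
            "SupportedResponseMIMETypes": ["application/json"],
            "SupportedRealtimeInferenceInstanceTypes": ["ml.m5.large"],
            "SupportedTransformInstanceTypes": ["ml.m5.large"],
        },
        # Required for publishing the algorithm on AWS Marketplace
        "CertifyForMarketplace": True,
    }

algorithm_request = build_algorithm_request(
    "my-algorithm",
    "123456789012.dkr.ecr.us-east-1.amazonaws.com/my-training-image:latest",
    "123456789012.dkr.ecr.us-east-1.amazonaws.com/my-inference-image:latest",
)

# import boto3
# sagemaker = boto3.client("sagemaker")
# sagemaker.create_algorithm(**algorithm_request)
```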

## Creating a training image for algorithms
<a name="ml-creating-a-training-image-for-algorithms"></a>

 This section provides a walkthrough for packaging your training code into a training image. A training image is required to create an algorithm product. 

 A *training image* is a Docker image containing your training algorithm. The container adheres to a specific file structure to allow SageMaker AI to copy data to and from your container. 

 Both the training and inference images are required when publishing an algorithm product. After creating your training image, you must create an inference image. The two images can be combined into one image or kept separate; whether to combine or separate them is up to you. Typically, inference is simpler than training, and you might want separate images to help with inference performance.

**Note**  
 The following is only one example of packaging code for a training image. For more information, see [Use your own algorithms and models with the AWS Marketplace](https://docs.aws.amazon.com/sagemaker/latest/dg/your-algorithms-marketplace.html) and the [AWS Marketplace SageMaker AI examples](https://github.com/aws/amazon-sagemaker-examples/tree/master/aws_marketplace) on GitHub.

**Topics**
+ [Step 1: Creating the container image](#ml-step-1-creating-the-container-image-1)
+ [Step 2: Building and testing the image locally](#ml-step-2-building-and-testing-the-image-locally-1)

### Step 1: Creating the container image
<a name="ml-step-1-creating-the-container-image-1"></a>

 For the training image to be compatible with Amazon SageMaker AI, it must adhere to a specific file structure that allows SageMaker AI to copy the training data and configuration inputs to specific paths in your container. When training completes, the generated model artifacts are stored in a specific directory path in the container, from which SageMaker AI copies them. 

 The following walkthrough uses the Docker CLI installed in a development environment on an Ubuntu distribution of Linux. 
+ [Prepare your program to read configuration inputs](#ml-prepare-your-program-to-read-configuration-inputs)
+ [Prepare your program to read data inputs](#ml-prepare-your-program-to-read-data-inputs)
+ [Prepare your program to write training outputs](#ml-prepare-your-program-to-write-training-outputs)
+ [Create the script for the container run](#ml-create-the-script-for-the-container-run-1)
+ [Create the `Dockerfile`](#ml-create-the-dockerfile-1)

#### Prepare your program to read configuration inputs
<a name="ml-prepare-your-program-to-read-configuration-inputs"></a>

 If your training program requires any buyer-provided configuration input, the following are the locations inside your container where those inputs are copied at runtime. If required, your program must read from those specific file paths. 
+  `/opt/ml/input/config` is the directory that contains information which controls how your program runs. 
  +  `hyperparameters.json` is a JSON-formatted dictionary of hyperparameter names and values. The values are strings, so you may need to convert them. 
  +  `resourceConfig.json` is a JSON-formatted file that describes the network layout used for [ distributed training](https://docs.aws.amazon.com/sagemaker/latest/dg/your-algorithms-training-algo-running-container.html#your-algorithms-training-algo-running-container-dist-training). If your training image does not support distributed training, you can ignore this file. 

**Note**  
 For more information about configuration inputs, see [ How Amazon SageMaker AI Provides Training Information](https://docs.aws.amazon.com/sagemaker/latest/dg/your-algorithms-training-algo-running-container.html). 
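Because every hyperparameter value arrives as a string, a training program typically converts them on read, as in the following sketch. The path is parameterized so the snippet can run outside a container; inside the container it would be `/opt/ml/input/config/hyperparameters.json`. The hyperparameter names `learning_rate` and `epochs` are hypothetical examples.

```python
import json
import os
import tempfile

def read_hyperparameters(config_path="/opt/ml/input/config/hyperparameters.json"):
    """Load hyperparameters; SageMaker AI passes every value as a string."""
    with open(config_path) as f:
        raw = json.load(f)
    # Convert the string values to the types your training program needs
    return {
        "learning_rate": float(raw.get("learning_rate", "0.01")),
        "epochs": int(raw.get("epochs", "10")),
    }

# Simulate the file SageMaker AI writes, for local testing
config_dir = tempfile.mkdtemp()
config_file = os.path.join(config_dir, "hyperparameters.json")
with open(config_file, "w") as f:
    json.dump({"learning_rate": "0.5", "epochs": "3"}, f)

hyperparameters = read_hyperparameters(config_file)
```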

#### Prepare your program to read data inputs
<a name="ml-prepare-your-program-to-read-data-inputs"></a>

 Training data can be passed to the container in one of the following two modes, and your training program running in the container must read the data accordingly. 

 **File mode** 
+  `/opt/ml/input/data/<channel_name>/` contains the input data for that channel. The channels are created based on the call to the `CreateTrainingJob` operation, but generally, the channels that the buyer provides must match what the algorithm expects. The files for each channel are copied from [Amazon S3](https://aws.amazon.com/s3/) to this directory, preserving the tree structure indicated by the Amazon S3 key structure. 

 **Pipe mode** 
+  `/opt/ml/input/data/<channel_name>_<epoch_number>` is the pipe for a given epoch. Epochs start at zero and increase by one each time you read them. There is no limit to the number of epochs that you can run, but you must close each pipe before reading the next epoch. 
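The Pipe mode pattern, opening a fresh pipe per epoch and closing it before the next, can be sketched as follows. The channel name `training` is a hypothetical example, the `base` path is parameterized for local testing, and how you parse the returned bytes depends entirely on your data format.

```python
def read_epoch(channel="training", epoch=0, base="/opt/ml/input/data"):
    """Read one full pass over the data from the channel's pipe for this epoch."""
    pipe_path = f"{base}/{channel}_{epoch}"
    # The with-block closes the pipe, which is required before
    # opening the pipe for the next epoch
    with open(pipe_path, "rb") as pipe:
        return pipe.read()  # parse records according to your data format
```

A training loop might then call `read_epoch(epoch=e)` for `e` in `range(num_epochs)`, closing each pipe (via the `with` block) before opening the next.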

#### Prepare your program to write training outputs
<a name="ml-prepare-your-program-to-write-training-outputs"></a>

 The output of the training is written to the following container directories: 
+  `/opt/ml/model/` is the directory where you write the model or the model artifacts that your training algorithm generates. Your model can be in any format that you want. It can be a single file or a whole directory tree. SageMaker AI packages any files in this directory into a compressed file (.tar.gz). This file is available at the Amazon S3 location returned by the `DescribeTrainingJob` API operation. 
+  `/opt/ml/output/` is a directory where the algorithm can write a `failure` file that describes why the job failed. The contents of this file are returned in the `FailureReason` field of the `DescribeTrainingJob` result. For jobs that succeed, there is no reason to write this file because it’s ignored. 
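A training program can finish by writing its artifacts and, on error, a failure file, as in the following sketch. The directories are parameterized for local testing; inside the container they would be `/opt/ml/model/` and `/opt/ml/output/`, and the `model.json` file name is a hypothetical artifact format.

```python
import json
import os

def save_model(model, model_dir="/opt/ml/model"):
    """Write artifacts; SageMaker AI packages this directory into a .tar.gz file."""
    os.makedirs(model_dir, exist_ok=True)
    with open(os.path.join(model_dir, "model.json"), "w") as f:
        json.dump(model, f)

def report_failure(reason, output_dir="/opt/ml/output"):
    """Write the failure file returned in DescribeTrainingJob's FailureReason field."""
    os.makedirs(output_dir, exist_ok=True)
    with open(os.path.join(output_dir, "failure"), "w") as f:
        f.write(reason)
```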

#### Create the script for the container run
<a name="ml-create-the-script-for-the-container-run-1"></a>

 Create a `train` shell script that SageMaker AI runs when it runs the Docker container image. When the training completes and the model artifacts are written to their respective directories, exit the script. 

 **`./train`** 

```
#!/bin/bash

# Run your training program here
#
#
#
#
```

#### Create the `Dockerfile`
<a name="ml-create-the-dockerfile-1"></a>

 Create a `Dockerfile` in your build context. This example uses Ubuntu 18.04 as the base image, but you can start from any base image that works for your framework. 

 **`./Dockerfile`** 

```
FROM ubuntu:18.04

# Add training dependencies and programs
#
#
#
#
#
# Add a script that SageMaker AI will run
# Set run permissions
# Prepend program directory to $PATH
COPY /train /opt/program/train
RUN chmod 755 /opt/program/train
ENV PATH=/opt/program:${PATH}
```

 The `Dockerfile` adds the previously created `train` script to the image. The script’s directory is added to the PATH so it can run when the container runs. 

 In the previous example, there is no actual training logic. For your actual training image, add the training dependencies to the `Dockerfile`, and add the logic to read the training inputs to train and generate the model artifacts. 

 Your training image must contain all of its required dependencies because it will not have internet access. 

 For more information, see [Use your own algorithms and models with the AWS Marketplace](https://docs.aws.amazon.com/sagemaker/latest/dg/your-algorithms-marketplace.html) and the [AWS Marketplace SageMaker AI examples](https://github.com/aws/amazon-sagemaker-examples/tree/master/aws_marketplace) on GitHub.

### Step 2: Building and testing the image locally
<a name="ml-step-2-building-and-testing-the-image-locally-1"></a>

 In the build context, the following files now exist: 
+ `./Dockerfile`
+ `./train`
+ Your training dependencies and logic

 Next you can build, run, and test this container image. 

#### Build the image
<a name="ml-build-the-image-1"></a>

 Run the Docker command in the build context to build and tag the image. This example uses the tag `my-training-image`. 

```
sudo docker build --tag my-training-image ./
```

 After running this Docker command to build the image, you should see output as Docker builds the image based on each line in your `Dockerfile`. When it finishes, you should see something similar to the following. 

```
Successfully built abcdef123456
Successfully tagged my-training-image:latest
```

#### Run locally
<a name="ml-run-locally-1"></a>

 After the build completes, test the image locally as shown in the following example. 

```
sudo docker run \
  --rm \
  --volume '<path_to_input>:/opt/ml/input:ro' \
  --volume '<path_to_model>:/opt/ml/model' \
  --volume '<path_to_output>:/opt/ml/output' \
  --name my-training-container \
  my-training-image \
  train
```

 The following are command details: 
+  `--rm` – Automatically remove the container after it stops. 
+  `--volume '<path_to_input>:/opt/ml/input:ro'` – Make test input directory available to container as read-only. 
+  `--volume '<path_to_model>:/opt/ml/model'` – Bind mount the path where the model artifacts are stored on the host machine when the training test is complete. 
+  `--volume '<path_to_output>:/opt/ml/output'` – Bind mount the path where the failure reason in a `failure` file is written to on the host machine. 
+  `--name my-training-container` – Give this running container a name. 
+  `my-training-image` – Run the built image. 
+  `train` – Run the same script SageMaker AI runs when running the container. 

 After running this command, Docker creates a container from the training image you built and runs it. The container runs the `train` script, which starts your training program. 

 After your training program finishes and the container exits, check that the output model artifacts are correct. Additionally, check the log outputs to confirm that they are not producing logs that you do not want, while ensuring enough information is provided about the training job. 

 This completes packaging your training code for an algorithm product. Because an algorithm product also includes an inference image, continue to the next section, [Creating an inference image for algorithms](#ml-creating-an-inference-image-for-algorithms). 

## Creating an inference image for algorithms
<a name="ml-creating-an-inference-image-for-algorithms"></a>

 This section provides a walkthrough for packaging your inference code into an inference image for your algorithm product. 

 The inference image is a Docker image containing your inference logic. The container at runtime exposes HTTP endpoints to allow SageMaker AI to pass data to and from your container. 

 Both the training and inference images are required when publishing an algorithm product. If you have not already done so, see the previous section about [Creating a training image for algorithms](#ml-creating-a-training-image-for-algorithms). The two images can be combined into one image or kept separate; whether to combine or separate them is up to you. Typically, inference is simpler than training, and you might want separate images to help with inference performance.

**Note**  
 The following is only one example of packaging code for an inference image. For more information, see [ Use your own algorithms and models with the AWS Marketplace](https://docs.aws.amazon.com/sagemaker/latest/dg/your-algorithms-marketplace.html) and the [AWS Marketplace SageMaker AI examples](https://github.com/aws/amazon-sagemaker-examples/tree/master/aws_marketplace) on GitHub.  
The following example uses a web service, [Flask](https://pypi.org/project/Flask/), for simplicity, and is not considered production-ready.

**Topics**
+ [Step 1: Creating the inference image](#ml-step-1-creating-the-inference-image)
+ [Step 2: Building and testing the image locally](#ml-step-2-building-and-testing-the-image-locally-2)

### Step 1: Creating the inference image
<a name="ml-step-1-creating-the-inference-image"></a>

 For the inference image to be compatible with SageMaker AI, the Docker image must expose HTTP endpoints. While your container is running, SageMaker AI passes inputs for inference provided by the buyer to your container’s HTTP endpoint. The result of the inference is returned in the body of the HTTP response. 

 The following walkthrough uses the Docker CLI installed in a development environment on an Ubuntu distribution of Linux. 
+ [Create the web server script](#ml-create-the-web-server-script-1) 
+ [Create the script for the container run](#ml-create-the-script-for-the-container-run-2)
+ [Create the `Dockerfile`](#ml-create-the-dockerfile-2)
+ [Preparing your program to dynamically load model artifacts](#ml-preparing-your-program-to-dynamically-load-model-artifacts)

#### Create the web server script
<a name="ml-create-the-web-server-script-1"></a>

 This example uses a Python server called [Flask](https://pypi.org/project/Flask/), but you can use any web server that works for your framework. 

**Note**  
[Flask](https://pypi.org/project/Flask/) is used here for simplicity. It is not considered a production-ready web server.

 Create the Flask web server script that serves the two HTTP endpoints on TCP port 8080 that SageMaker AI uses. The following are the two expected endpoints: 
+  `/ping` – SageMaker AI makes HTTP GET requests to this endpoint to check if your container is ready. When your container is ready, it responds to HTTP GET requests at this endpoint with an HTTP 200 response code. 
+  `/invocations` – SageMaker AI makes HTTP POST requests to this endpoint for inference. The input data for inference is sent in the body of the request. The user-specified content type is passed in the HTTP header. The body of the response is the inference output. 

 **`./web_app_serve.py`** 

```
# Import modules
import json
from flask import Flask
from flask import request
app = Flask(__name__)

# Create a path for health checks
@app.route("/ping")
def endpoint_ping():
  return ""
 
# Create a path for inference
@app.route("/invocations", methods=["POST"])
def endpoint_invocations():
  
  # Read the input
  input_str = request.get_data().decode("utf8")
  
  # Add your inference code between these comments.
  #
  #
  #
  #
  #
  # Add your inference code above this comment.
  
  # Return a response with a prediction
  response = {"prediction":"a","text":input_str}
  return json.dumps(response)
```

 In the previous example, there is no actual inference logic. For your actual inference image, add the inference logic into the web app so it processes the input and returns the prediction. 

 Your inference image must contain all of its required dependencies because it will not have internet access. 

#### Create the script for the container run
<a name="ml-create-the-script-for-the-container-run-2"></a>

 Create a script named `serve` that SageMaker AI runs when it runs the Docker container image. In this script, start the HTTP web server. 

 **`./serve`** 

```
#!/bin/bash

# Run flask server on port 8080 for SageMaker AI
flask run --host 0.0.0.0 --port 8080
```

#### Create the `Dockerfile`
<a name="ml-create-the-dockerfile-2"></a>

 Create a `Dockerfile` in your build context. This example uses Ubuntu 18.04, but you can start from any base image that works for your framework. 

 **`./Dockerfile`** 

```
FROM ubuntu:18.04

# Specify encoding
ENV LC_ALL=C.UTF-8
ENV LANG=C.UTF-8

# Install python-pip
RUN apt-get update \
&& apt-get install -y python3.6 python3-pip \
&& ln -s /usr/bin/python3.6 /usr/bin/python \
&& ln -s /usr/bin/pip3 /usr/bin/pip;

# Install flask server
RUN pip install -U Flask;

# Add a web server script to the image
# Set an environment to tell flask the script to run
COPY /web_app_serve.py /web_app_serve.py
ENV FLASK_APP=/web_app_serve.py

# Add a script that Amazon SageMaker AI will run
# Set run permissions
# Prepend program directory to $PATH
COPY /serve /opt/program/serve
RUN chmod 755 /opt/program/serve
ENV PATH=/opt/program:${PATH}
```

 The `Dockerfile` adds the two previously created scripts to the image. The directory of the `serve` script is added to the PATH so it can run when the container runs. 

#### Preparing your program to dynamically load model artifacts
<a name="ml-preparing-your-program-to-dynamically-load-model-artifacts"></a>

 For algorithm products, the buyer uses their own datasets with your training image to generate unique model artifacts. When the training process completes, your training container outputs model artifacts to the container directory `/opt/ml/model/`. SageMaker AI compresses the contents of that directory into a .tar.gz file and stores it in the buyer’s AWS account in Amazon S3.

 When the model deploys, SageMaker AI runs your inference image, extracts the model artifacts from the .tar.gz file stored in the buyer’s account in Amazon S3, and loads them into the inference container in the `/opt/ml/model/` directory. At runtime, your inference container code uses the model data. 
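Loading the extracted artifacts at web server startup can look like the following sketch. It assumes the training image wrote a `model.json` file, which is a hypothetical artifact format used only for illustration; your inference code would load whatever format your training program produces.

```python
import json
import os

def load_model(model_dir="/opt/ml/model"):
    """Load artifacts that SageMaker AI extracted into the model directory."""
    with open(os.path.join(model_dir, "model.json")) as f:
        return json.load(f)
```

A typical pattern is to call `load_model()` once when the web server starts and keep the result in memory, so each `/invocations` request reuses the loaded model instead of reading from disk.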

**Note**  
 To protect any intellectual property that might be contained in the model artifact files, you can choose to encrypt the files before outputting them. For more information, see [Security and intellectual property with Amazon SageMaker AI](ml-security-and-intellectual-property.md). 

### Step 2: Building and testing the image locally
<a name="ml-step-2-building-and-testing-the-image-locally-2"></a>

 In the build context, the following files now exist: 
+ `./Dockerfile`
+ `./web_app_serve.py`
+ `./serve`

 Next you can build, run, and test this container image. 

#### Build the image
<a name="ml-build-the-image-2"></a>

 Run the Docker command to build and tag the image. This example uses the tag `my-inference-image`. 

```
sudo docker build --tag my-inference-image ./
```

 After running this Docker command to build the image, you should see output as Docker builds the image based on each line in your `Dockerfile`. When it finishes, you should see something similar to the following. 

```
Successfully built abcdef123456
Successfully tagged my-inference-image:latest
```

#### Run locally
<a name="ml-run-locally-2"></a>

 After your build has completed, you can test the image locally. 

```
sudo docker run \
  --rm \
  --publish 8080:8080/tcp \
  --volume '<path_to_model>:/opt/ml/model:ro' \
  --detach \
  --name my-inference-container \
  my-inference-image \
  serve
```

 The following are command details: 
+  `--rm` – Automatically remove the container after it stops. 
+  `--publish 8080:8080/tcp` – Expose port 8080 to simulate the port SageMaker AI sends HTTP requests to. 
+  `--volume '<path_to_model>:/opt/ml/model:ro'` – Bind mount the path to where the test model artifacts are stored on the host machine as read-only to make them available to your inference code in the container. 
+  `--detach` – Run the container in the background. 
+  `--name my-inference-container` – Give this running container a name. 
+  `my-inference-image` – Run the built image. 
+  `serve` – Run the same script SageMaker AI runs when running the container. 

 After running this command, Docker creates a container from the inference image and runs it in the background. The container runs the `serve` script, which starts your web server for testing purposes. 

#### Test the ping HTTP endpoint
<a name="ml-test-the-ping-http-endpoint-1"></a>

 When SageMaker AI runs your container, it periodically pings the endpoint. When the endpoint returns an HTTP response with status code 200, it signals to SageMaker AI that the container is ready for inference. 

 Run the following command to test the endpoint and include the response header. 

```
curl --include http://127.0.0.1:8080/ping
```

 The following is example output. 

```
HTTP/1.0 200 OK
Content-Type: text/html; charset=utf-8
Content-Length: 0
Server: MyServer/0.16.0 Python/3.6.8
Date: Mon, 21 Oct 2019 06:58:54 GMT
```

#### Test the inference HTTP endpoint
<a name="ml-test-the-inference-http-endpoint-1"></a>

 When the container indicates it is ready by returning a 200 status code, SageMaker AI passes the inference data to the `/invocations` HTTP endpoint via a `POST` request. 

 Run the following command to test the inference endpoint. 

```
curl \
  --request POST \
  --data "hello world" \
  http://127.0.0.1:8080/invocations
```

 The following is example output. 

```
{"prediction": "a", "text": "hello world"}
```

 With these two HTTP endpoints working, the inference image is now compatible with SageMaker AI. 
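For reference, the contract that these two endpoints implement can be sketched with only the Python standard library (earlier sections used Flask; this minimal echo version is illustrative, not production-ready):

```
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

class InferenceHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Health check: SageMaker AI expects /ping to return 200 when ready.
        if self.path == "/ping":
            self.send_response(200)
            self.send_header("Content-Length", "0")
            self.end_headers()
        else:
            self.send_response(404)
            self.end_headers()

    def do_POST(self):
        # Inference: SageMaker AI sends payloads to /invocations via POST.
        if self.path == "/invocations":
            length = int(self.headers.get("Content-Length", "0"))
            text = self.rfile.read(length).decode("utf-8")
            body = json.dumps({"text": text}).encode("utf-8")
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_response(404)
            self.end_headers()

    def log_message(self, *_args):
        pass  # suppress per-request logging in this sketch

def make_server(port=8080):
    # Port 8080 is the port SageMaker AI sends HTTP requests to.
    return HTTPServer(("127.0.0.1", port), InferenceHandler)
```

 A `serve` script would start this with `make_server().serve_forever()`; your real model logic replaces the echoing `do_POST` body. 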

**Note**  
 The model of your algorithm product can be deployed in two ways: real time and batch. For both deployments, SageMaker AI uses the same HTTP endpoints while running the Docker container. 

 To stop the container, run the following command. 

```
sudo docker container stop my-inference-container
```

 After both your training and inference images for your algorithm product are ready and tested, continue to [Uploading your images to Amazon Elastic Container Registry](ml-uploading-your-images.md). 

# Uploading your images to Amazon Elastic Container Registry
<a name="ml-uploading-your-images"></a>

After you create your inference and training images, you can upload them to [Amazon Elastic Container Registry (Amazon ECR)](https://aws.amazon.com/ecr/), a fully managed Docker registry. Amazon SageMaker AI pulls images from Amazon ECR to create a model package for inference or an algorithm for training jobs. AWS Marketplace also retrieves these images from Amazon ECR to publish your model package and algorithm products. This topic provides a walkthrough for uploading your inference and training images to Amazon ECR.

**Topics**
+ [Which images must I upload?](#ml-which-images-must-i-upload)
+ [What IAM permissions are required?](#ml-what-iam-permissions-are-required)
+ [Log your Docker client into AWS](#ml-log-in-your-docker-client)
+ [Create repository and upload image](#ml-create-repository-and-upload-image)
+ [Scan your uploaded image](#ml-scan-your-uploaded-image)

## Which images must I upload?
<a name="ml-which-images-must-i-upload"></a>

 If you're publishing a model package, upload only an inference image. If you're publishing an algorithm, upload both an inference image and a training image. If the inference and training images are combined, upload the combined image only once. 

## What IAM permissions are required?
<a name="ml-what-iam-permissions-are-required"></a>

 The following steps assume that the local machine has the correct AWS credentials for an AWS Identity and Access Management (IAM) role or user in the seller AWS account. The role or user must have the correct policies in place for both AWS Marketplace and Amazon ECR. For example, you could use the following AWS managed policies: 
+  [https://docs.aws.amazon.com/aws-managed-policy/latest/reference/AWSMarketplaceSellerProductsFullAccess.html](https://docs.aws.amazon.com/aws-managed-policy/latest/reference/AWSMarketplaceSellerProductsFullAccess.html) – For access to AWS Marketplace 
+  [https://docs.aws.amazon.com/aws-managed-policy/latest/reference/AmazonEC2ContainerRegistryFullAccess.html](https://docs.aws.amazon.com/aws-managed-policy/latest/reference/AmazonEC2ContainerRegistryFullAccess.html) – For access to Amazon ECR 

**Note**  
The links take you to the *AWS Managed Policy Reference*.

## Log your Docker client into AWS
<a name="ml-log-in-your-docker-client"></a>

 Set a variable for the AWS Region that you want to publish from (see [Supported AWS Regions for publishing](ml-service-restrictions-and-limits.md#ml-supported-aws-regions-for-publishing)). For this example, use the US East (Ohio) Region. 

```
region=us-east-2
```

 Run the following command to set a variable with your AWS account ID. This example assumes that the current AWS Command Line Interface (AWS CLI) credentials belong to the seller’s AWS account. 

```
account=$(aws sts get-caller-identity --query Account --output text)
```

 To authenticate your Docker CLI client with the Amazon ECR registry for your AWS account and Region, run the following command.

```
aws ecr get-login-password \
--region ${region} \
| sudo docker login \
--username AWS \
--password-stdin \
${account}.dkr.ecr.${region}.amazonaws.com
```

## Create repository and upload image
<a name="ml-create-repository-and-upload-image"></a>

 Set a variable for the tag of the uploaded image and another variable for the name of the uploaded image repository. 

```
image=my-inference-image
repo=my-inference-image
```

**Note**  
 In previous sections of this guide where the inference and training images were built, they were tagged as **my-inference-image** and **my-training-image**, respectively. For this example, create and upload the inference image to a repository with the same name. 

 Run the following command to create the image repository in Amazon ECR. 

```
aws ecr --region ${region} create-repository --repository-name "${repo}"
```

 The full name of the Amazon ECR repository location is made up of the following parts: `<account-id>.dkr.ecr.<region>.amazonaws.com/<image-repository-name>` 
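Expressed as code, the composition is a one-line helper (the account ID, Region, and repository name in the comment are placeholders):

```
def ecr_image_uri(account_id, region, repo, tag="latest"):
    # <account-id>.dkr.ecr.<region>.amazonaws.com/<image-repository-name>:<tag>
    return f"{account_id}.dkr.ecr.{region}.amazonaws.com/{repo}:{tag}"
```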

 To push the image to the repository, you must tag it with the full name of the repository location. 

 Set a variable for the full name of the image repository location along with the `latest` tag. 

```
fullname="${account}.dkr.ecr.${region}.amazonaws.com/${repo}:latest"
```

 Tag the image with the full name. 

```
sudo docker tag ${image} ${fullname}
```

 Finally, push the inference image to the repository in Amazon ECR. 

```
sudo docker push ${fullname}
```

 After the upload completes, the image appears in the [repository list of the Amazon ECR console](https://console.aws.amazon.com/ecr/repositories?region=us-east-2) in the Region that you are publishing from. In the previous example, the image was pushed to a repository in the US East (Ohio) Region. 

## Scan your uploaded image
<a name="ml-scan-your-uploaded-image"></a>

 In the [Amazon ECR console](https://console.aws.amazon.com/ecr/repositories?region=us-east-2), choose the AWS Region that you are publishing from, and open the repository that the image was uploaded to. Select your uploaded image and start a scan to check for known vulnerabilities. AWS Marketplace checks the Amazon ECR scan results of the container images used in your Amazon SageMaker AI resource before publishing it. Before you can create your product, you must fix container images that have vulnerabilities with either a Critical or High severity. 

 After your images are scanned successfully, they can be used to create a model package or algorithm resource. 

If you believe that your product had errors in the scan that are false positives, contact the [AWS Marketplace Seller Operations](https://aws.amazon.com/marketplace/management/contact-us) team with information about the error.

 **Next steps** 
+  See size limits in [Requirements and best practices for creating machine learning products](ml-listing-requirements-and-best-practices.md) 
+  Continue to [Creating your Amazon SageMaker AI resource](ml-creating-your-amazon-sagemaker-resource.md) 

# Creating your Amazon SageMaker AI resource
<a name="ml-creating-your-amazon-sagemaker-resource"></a>

 To publish a model package or an algorithm product, you must create the respective [ model package resource](https://docs.aws.amazon.com/marketplace/latest/userguide/ml-creating-your-amazon-sagemaker-resource.html#ml-creating-your-model-package-product) or [ algorithm resource](https://docs.aws.amazon.com/marketplace/latest/userguide/ml-creating-your-amazon-sagemaker-resource.html#ml-creating-your-algorithm-product) in Amazon SageMaker AI. When you create your resource for an AWS Marketplace product, it must be certified through a validation step. The validation step requires that you provide data to test your model package or algorithm resource before it can be published. The following sections show you how to create your SageMaker AI resource, either a model package resource or an algorithm resource. This includes setting the validation specifications that tell SageMaker AI how to perform the validation. 

**Note**  
If you haven't yet created the images for your product and uploaded them to Amazon Elastic Container Registry (Amazon ECR), see [Packaging your code into images for machine learning products in AWS Marketplace](ml-packaging-your-code-into-images.md) and [Uploading your images to Amazon Elastic Container Registry](ml-uploading-your-images.md) for information about how to do so.

**Topics**
+ [Creating your model package](#ml-creating-your-model-package-product)
+ [Creating your algorithm](#ml-creating-your-algorithm-product)

## Creating your model package
<a name="ml-creating-your-model-package-product"></a>

 The following are requirements for creating a model package for AWS Marketplace: 
+  An inference image stored in [Amazon ECR](https://aws.amazon.com/ecr/) 
+  (Optional) Model artifacts, stored separately in [Amazon S3](https://aws.amazon.com/s3/) 
+ Your test data used for inferences, stored in Amazon S3 

**Note**  
 The following is about creating a model package product. For more information about model packages in SageMaker AI, see [Create a Model Package Resource](https://docs.aws.amazon.com/sagemaker/latest/dg/sagemaker-mkt-create-model-package.html). 

### Creating the model package resources
<a name="ml-create-model-package"></a>

The following procedures step you through creating the model package resources.

**Step 1: To create the model package resources**

1. Open the [ Amazon SageMaker AI console](https://us-east-2.console.aws.amazon.com/sagemaker/home).

1. Ensure that you are in the AWS Region that you want to publish from by looking at the top right of the page. For publishing, see the [Supported AWS Regions for publishing](ml-service-restrictions-and-limits.md#ml-supported-aws-regions-for-publishing) section. The inference image you uploaded to Amazon ECR in previous steps must be in the same Region. 

1. In the left navigation menu, choose **Model packages**.

1. Choose **Create model package**.

After you create the package, you need to set the specifications of the inference package.

**Step 2: To set inference specifications**

1.  Provide a **Name** for your model package (for example, *my-model-package*). 

1.  For **Location of inference image**, enter the URI of your inference image that was uploaded to Amazon ECR. You can retrieve the URI by locating your image in the [Amazon ECR console](https://us-east-2.console.aws.amazon.com/ecr/repositories). 

1.  If your model artifacts from training are bundled with your logic in your inference image, leave the **Location of model data artifacts** empty. Otherwise, specify the full Amazon S3 location of the compressed file (.tar.gz) of your model artifacts. 

1.  Using the dropdown box, choose the supported instance types of your inference image for both real-time inference (also known as *endpoint*) and batch-transform jobs. 

1.  Choose **Next**. 

 Before your model package can be created and published, validation is necessary to ensure that it functions as expected. This requires that you run a batch transform job with test data for inference that you provide. The validation specifications tell SageMaker AI how to perform the validation. 

**Step 3: To set validation specifications**

1.  Set **Publish this model package in AWS Marketplace** to **Yes**. If you set this to **No**, you can't publish this model package later. Choosing **Yes** [ certifies](https://docs.aws.amazon.com/sagemaker/latest/APIReference/API_CreateModelPackage.html#sagemaker-CreateModelPackage-request-CertifyForMarketplace) your model package for AWS Marketplace and requires the validation step. 

1.  If this is the first time completing this process, choose **Create a new role** for the **IAM role**. Amazon SageMaker AI uses this role when it deploys your model package. This includes actions, such as pulling images from Amazon ECR and artifacts from Amazon S3. Review the settings, and choose **Create role**. Creating a role here grants permissions described by the [ AmazonSageMakerFullAccess](https://console.aws.amazon.com/iam/home#/policies/arn:aws:iam::aws:policy/AmazonSageMakerFullAccess) IAM policy to the role that you create. 

1.  Edit the **JSON** in the validation profile. For details about allowed values, see [TransformJobDefinition](https://docs.aws.amazon.com/sagemaker/latest/APIReference/API_TransformJobDefinition.html). 

   1.  `TransformInput.DataSource.S3Uri`: Set to where your test data for inference is stored. 

   1.  `TransformInput.ContentType`: Specify your test data content type (for example, `application/json`, `text/plain`, `image/png`, or any other value). SageMaker AI does not validate the actual input data. This value is passed to your container HTTP endpoint in the `Content-Type` header. 

   1.  `TransformInput.CompressionType`: Set to `None` if your test data for inference in Amazon S3 is not compressed. 

   1.  `TransformInput.SplitType`: Set to `None` to pass each object in Amazon S3 as a whole for inference. 

   1.  `TransformOutput.S3OutputPath`: Set to the location where the inference output is stored. 

   1.  `TransformOutput.AssembleWith`: Set to `None` to output each inference as separate objects in Amazon S3. 

1.  Choose **Create model package**. 
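Assembled, the edited validation profile resembles the following sketch. The bucket, prefixes, and instance type here are hypothetical placeholders, and the full set of allowed fields is defined by [TransformJobDefinition](https://docs.aws.amazon.com/sagemaker/latest/APIReference/API_TransformJobDefinition.html).

```
{
    "TransformInput": {
        "DataSource": {
            "S3DataSource": {
                "S3DataType": "S3Prefix",
                "S3Uri": "s3://amzn-s3-demo-bucket/validation/input/"
            }
        },
        "ContentType": "text/plain",
        "CompressionType": "None",
        "SplitType": "None"
    },
    "TransformOutput": {
        "S3OutputPath": "s3://amzn-s3-demo-bucket/validation/output/",
        "AssembleWith": "None"
    },
    "TransformResources": {
        "InstanceType": "ml.m5.large",
        "InstanceCount": 1
    }
}
```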

 SageMaker AI pulls the inference image from Amazon ECR, copies any artifacts to the inference container, and runs a batch transform job using your test data for inference. After the validation succeeds, the status changes to **Completed**. 

**Note**  
 The validation step does not evaluate the accuracy of the model with your test data. The validation step checks if the container runs and responds as expected. 

 You have completed creating your model product resources. Continue to [Listing your product in AWS Marketplace](ml-publishing-your-product-in-aws-marketplace.md). 

## Creating your algorithm
<a name="ml-creating-your-algorithm-product"></a>

 The following are requirements for creating an algorithm for AWS Marketplace: 
+ An inference image, stored in Amazon ECR 
+ A training image, stored in Amazon ECR 
+  Your test data for training, stored in Amazon S3 
+ Your test data for inference, stored in Amazon S3 

**Note**  
 The following walkthrough creates an algorithm product. For more information, see [Create an Algorithm Resource](https://docs.aws.amazon.com/sagemaker/latest/dg/sagemaker-mkt-create-algo.html). 

### Creating the algorithm resources
<a name="ml-create-algorithm"></a>

The following procedures step you through creating the resources in your algorithm package.

**Step 1: To create the algorithm resources**

1.  Open the [ Amazon SageMaker AI console](https://us-east-2.console.aws.amazon.com/sagemaker/home). 

1.  Ensure that you are in the AWS Region that you want to publish from by looking at the top right of the page (see [Supported AWS Regions for publishing](ml-service-restrictions-and-limits.md#ml-supported-aws-regions-for-publishing)). The training and inference images you uploaded to Amazon ECR in previous steps must be in this same Region. 

1.  In the left navigation menu, choose **Algorithms**. 

1.  Choose **Create algorithm**. 

After you have created the algorithm package, you must set the specifications for the training and tuning of your model.

**Step 2: To set the training and tuning specifications**

1.  Enter the **Name** for your algorithm (for example, *my-algorithm*). 

1.  For **Training image**, paste the full URI location of your training image that was uploaded to Amazon ECR. You can retrieve the URI by locating your image in the [Amazon ECR console](https://us-east-2.console.aws.amazon.com/ecr/repositories). 

1.  Using the dropdown box, choose the **instance types for training** that your training image supports. 

1.  Under the **Channel specification** section, add a channel for each input dataset that your algorithm supports, up to 20 channels of input sources. For more information, see [ Input Data Configuration](https://docs.aws.amazon.com/sagemaker/latest/dg/your-algorithms-training-algo-running-container.html#your-algorithms-training-algo-running-container-inputdataconfig). 

1.  Choose **Next**. 

1. If your algorithm supports hyperparameters and hyperparameter tuning, you must specify the tuning parameters.

1.  Choose **Next**. 

**Note**  
 We highly recommend that your algorithm supports hyperparameter tuning and makes appropriate parameters tunable. This allows data scientists to tune models to get the best results. 

After you have set the tuning parameters, if any, you must set the specifications for your inference image.

**Step 3: To set inference image specification**

1.  For **Location of inference image**, paste the URI of the inference image that was uploaded to Amazon ECR. You can retrieve the URI by locating your image in the [Amazon ECR Console](https://us-east-2.console.aws.amazon.com/ecr/repositories). 

1.  Using the dropdown box, choose the supported instance types for your inference image for both real-time inference (also known as *endpoint*) and batch-transform jobs. 

1.  Choose **Next**. 

 Before your algorithm can be created and published, validation is necessary to ensure that it functions as expected. This requires that you run both a training job with test data for training and a batch transform job with test data for inference that you provide. The validation specifications tell SageMaker AI how to perform the validation. 

**Step 4: To set validation specifications**

1.  Set **Publish this algorithm in AWS Marketplace** to **Yes**. If you set this to **No**, you can't publish this algorithm later. Choosing **Yes** [ certifies](https://docs.aws.amazon.com/sagemaker/latest/APIReference/API_CreateAlgorithm.html#sagemaker-CreateAlgorithm-request-CertifyForMarketplace) your algorithm for AWS Marketplace and requires the validation specification.

1.  If this is your first time creating a machine learning package for AWS Marketplace, choose **Create a new role** for the **IAM role**. Amazon SageMaker AI uses this role when training your algorithm and deploying the subsequent model package. This includes actions such as pulling images from Amazon ECR, storing artifacts in Amazon S3, and copying training data from Amazon S3. Review the settings, and choose **Create role**. Creating a role here grants permissions described by the [ AmazonSageMakerFullAccess](https://console.aws.amazon.com/iam/home#/policies/arn:aws:iam::aws:policy/AmazonSageMakerFullAccess) IAM policy to the role that you create. 

1.  Edit the **JSON** file in the validation profile for **Training job definition**. For more information about allowed values, see [ TrainingJobDefinition](https://docs.aws.amazon.com/sagemaker/latest/APIReference/API_TrainingJobDefinition.html). 

   1.  `InputDataConfig`: In this JSON array, add a [Channel object](https://docs.aws.amazon.com/sagemaker/latest/APIReference/API_Channel.html) for each channel that you specified in the training-specification step. For each channel, specify where your test data for training is stored. 

   1.  `OutputDataConfig`: After the training completes, the model artifacts in the training container directory path `/opt/ml/model/` are compressed and copied out to Amazon S3. Specify the Amazon S3 location of where the compressed file (.tar.gz) is stored. 

1.  Edit the JSON file in the validation profile for **Transform job definition**. For more information about allowed values, see [ TransformJobDefinition](https://docs.aws.amazon.com/sagemaker/latest/APIReference/API_TransformJobDefinition.html). 

   1.  `TransformInput.DataSource.S3Uri`: Set to where your test data for inference is stored. 

   1.  `TransformInput.ContentType`: Specify your test data content type. For example, `application/json`, `text/plain`, `image/png`, or any other value. Amazon SageMaker AI does not validate the actual input data. This value is passed to your container HTTP endpoint in the `Content-Type` header. 

   1.  `TransformInput.CompressionType`: Set to `None` if your test data for inference in Amazon S3 is not compressed. 

   1.  `TransformInput.SplitType`: Choose how you want objects in S3 split. For example, `None` passes each object in Amazon S3 as a whole for inference. For more details, see [ SplitType](https://docs.aws.amazon.com/sagemaker/latest/APIReference/API_TransformInput.html#sagemaker-Type-TransformInput-SplitType) in the Amazon SageMaker AI API Reference. 

   1.  `TransformOutput.S3OutputPath`: Set to the location where the inference output is stored. 

   1.  `TransformOutput.AssembleWith`: Set to `None` to output each inference as separate objects in Amazon S3. 

1. Choose **Create algorithm package**.
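For the training side, the edited **Training job definition** resembles the following sketch. The bucket, channel name, and instance settings are hypothetical, and the channel name must match one declared in your channel specification; see [TrainingJobDefinition](https://docs.aws.amazon.com/sagemaker/latest/APIReference/API_TrainingJobDefinition.html) for the full shape.

```
{
    "TrainingInputMode": "File",
    "InputDataConfig": [
        {
            "ChannelName": "training",
            "DataSource": {
                "S3DataSource": {
                    "S3DataType": "S3Prefix",
                    "S3Uri": "s3://amzn-s3-demo-bucket/training/input/"
                }
            }
        }
    ],
    "OutputDataConfig": {
        "S3OutputPath": "s3://amzn-s3-demo-bucket/training/output/"
    },
    "ResourceConfig": {
        "InstanceType": "ml.m5.large",
        "InstanceCount": 1,
        "VolumeSizeInGB": 10
    },
    "StoppingCondition": {
        "MaxRuntimeInSeconds": 3600
    }
}
```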

 SageMaker AI pulls the training image from Amazon ECR, runs a test-training job using your data, and stores the model artifacts in Amazon S3. It then pulls the inference image from Amazon ECR, copies the artifacts from Amazon S3 into the inference container, and runs a batch transform job using your test data for inference. After the validation succeeds, the status changes to **Completed**. 

**Note**  
 The validation step does not evaluate the accuracy of the training or the model with your test data. The validation step checks if the containers run and respond as expected.   
The validation step only validates batch processing. It is up to you to validate that real-time processing works with your product.

 You have completed creating your algorithm product resources. Continue to [Listing your product in AWS Marketplace](ml-publishing-your-product-in-aws-marketplace.md). 

# Listing your product in AWS Marketplace
<a name="ml-publishing-your-product-in-aws-marketplace"></a>

After you package your code into model package images or algorithm images, upload your images, and create your Amazon SageMaker AI resources, you can publish your machine learning product in AWS Marketplace. The following sections walk you through the publishing process, which includes creating your product listing, testing your product, and signing off for publishing. After your product is published, you can request changes to update your listing. For more information, see [Managing your machine learning products](ml-product-management.md). 

**Topics**
+ [Prerequisites](ml-publishing-prereq.md)
+ [Step 1: Create a new listing](create-new-listing.md)
+ [Step 2: Provide product information](provide-general-info.md)
+ [Step 3: Add initial product version](add-initial-version.md)
+ [Step 4: Configure the pricing model](set-pricing-model.md)
+ [Step 5: Configure refund policy](configure-refund-policy.md)
+ [Step 6: Configure EULA](configure-eula.md)
+ [Step 7: Configure allowlist](configure-allowlist.md)

# Prerequisites
<a name="ml-publishing-prereq"></a>

Before you can publish your model package or algorithm in AWS Marketplace, you must have the following:
+  An AWS account that is registered as an AWS Marketplace seller. You can do this in the [AWS Marketplace Management Portal](https://aws.amazon.com/marketplace/management/). 
+  A completed seller profile under the [Settings](https://aws.amazon.com/marketplace/management/seller-settings) page in the AWS Marketplace Management Portal. 
+  For publishing paid products, you must complete the tax interview and bank forms. This is not required for publishing free products. For more information, see [Seller registration process](https://docs.aws.amazon.com/marketplace/latest/userguide/registration-process.html). 
+ You must have permissions to access the AWS Marketplace Management Portal and Amazon SageMaker AI. For more information, see [Required permissions](#ml-permissions-required).

## Required permissions
<a name="ml-permissions-required"></a>

To publish an Amazon SageMaker AI product, you must specify a valid IAM role ARN that has a trust relationship with the AWS Marketplace service principal. Additionally, the IAM user or role you are signed in as requires the necessary permissions.

**Setting sign-in permissions**
+  Add the following permissions to the IAM role: 

  1. **sagemaker:DescribeModelPackage** — For listing a model package 

  1.  **sagemaker:DescribeAlgorithm** — For listing an algorithm 

```
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "sagemaker:DescribeModelPackage",
                "sagemaker:DescribeAlgorithm"
            ],
            "Resource": "*"
        }
    ]
}
```

**Setting the IAM role AddVersion/Create product**

1. Follow the steps to create a role with a custom trust policy. For more information, see [Creating an IAM role using a custom trust policy (console)](https://docs.aws.amazon.com//IAM/latest/UserGuide/id_roles_create_for-custom.html).

1. Enter the following for the custom trust policy statement:

   ```
   {
       "Version": "2012-10-17",
       "Statement": [
           {
               "Sid": "Statement1",
               "Effect": "Allow",
               "Principal": {
                   "Service": "assets.marketplace.amazonaws.com"
               },
               "Action": "sts:AssumeRole"
           }
       ]
   }
   ```

1. Enter the following permissions policy:

   ```
   {
       "Version": "2012-10-17",
       "Statement": [
           {
               "Effect": "Allow",
               "Action": [
                   "sagemaker:DescribeModelPackage",
                   "sagemaker:DescribeAlgorithm"
               ],
               "Resource": "*"
           }
       ]
   }
   ```

1. Provide the role ARN when requested. The role should follow the format: `arn:aws:iam::<account-id>:role/<role-name>`.

 For the AWS Marketplace permissions needed, or for managing your seller account, see [Policies and permissions for AWS Marketplace sellers](https://docs.aws.amazon.com/marketplace/latest/userguide/detailed-management-portal-permissions.html). 

## Required assets
<a name="ml-required-assets"></a>

Before creating a machine learning product listing, ensure that you have the following required assets:
+ **Amazon Resource Name (ARN)** — Provide the ARN of the model package or algorithm resource in the AWS Region that you are publishing from (see [Supported AWS Regions for publishing](ml-service-restrictions-and-limits.md#ml-supported-aws-regions-for-publishing)). 
  +  An ARN for a model package has this form: `arn:aws:sagemaker:<region>:<account-id>:model-package/<model-package-name>` 

     To find your model package ARN, see [My marketplace model packages](https://console.aws.amazon.com/sagemaker/home#/model-packages/my-resources). 
  +  An ARN for an algorithm has this form: `arn:aws:sagemaker:<region>:<account-id>:algorithm/<algorithm-name>` 

     To find your algorithm resource ARN, see [My algorithms](https://console.aws.amazon.com/sagemaker/home#/algorithms/my-resources). 
+ [Requirements for usage information](ml-listing-requirements-and-best-practices.md#ml-requirements-for-usage-information) — Provide details about inputs, outputs, and code examples. 
+  [Requirements for inputs and outputs](ml-listing-requirements-and-best-practices.md#ml-requirements-for-inputs-and-outputs) — Provide either files or text. 
+ [Requirements for Jupyter notebook](ml-listing-requirements-and-best-practices.md#ml-requirements-for-jupyter-notebook) — Demonstrate complete product usage. 

# Step 1: Create a new listing
<a name="create-new-listing"></a>

 To get started with a machine learning product, you'll initiate the listing process by setting the product name, adding optional resource tags for organization, and generating the product ID. The product ID is used to track your product throughout its lifecycle. 

**Note**  
 Before creating your listing, ensure that you have the required resources specified in [Requirements and best practices for creating machine learning products](ml-listing-requirements-and-best-practices.md). 

1. Sign in to your seller AWS account and go to the [AWS Marketplace Management Portal](https://aws.amazon.com/marketplace/management). 

1.  In the top menu, go to **Products** and then choose **Machine learning**. 

1.  Choose **Create machine learning product**.

1. Under **Product name**, enter a unique product name that will be displayed to buyers at the top of the product listing page and in search results.

1.  (Optional) Under **Tags**, enter any tags you want to associate with the product. For more information, see [Tagging AWS resources](https://docs.aws.amazon.com/tag-editor/latest/userguide/tagging.html). 

1.  Under **Product ID and code**, choose **Generate product ID and code**. 

1.  Choose **Continue to wizard**. You'll start the process of adding detailed product information in the wizard. 

# Step 2: Provide product information
<a name="provide-general-info"></a>

 When listing your machine learning product in AWS Marketplace, providing comprehensive and accurate product information is crucial. Use the **Provide product information** step in the wizard to capture essential details about your offering such as product categories and support information. 

1.  Enter information about your product. 

1.  Choose **Next** to move to the next step in the wizard. 

# Step 3: Add initial product version
<a name="add-initial-version"></a>

 This page guides you through adding the initial version of your product. Your product may have multiple versions throughout its lifecycle, and each version is identified by a unique SageMaker AI ARN. 

1.  Under **Amazon Resource Names (ARNs)**: 

   1.  Enter the model or algorithm Amazon SageMaker AI ARN. 
      +  Example model package ARN: `arn:aws:sagemaker:<region>:<account-id>:model-package/<model-package-name>` 

         To find your model package ARN, see [My marketplace model packages](https://console.aws.amazon.com/sagemaker/home#/model-packages/my-resources). 
      +  Example algorithm ARN: `arn:aws:sagemaker:<region>:<account-id>:algorithm/<algorithm-name>` 

         To find your algorithm resource ARN, see [My algorithms](https://console.aws.amazon.com/sagemaker/home#/algorithms/my-resources). 

   1.  Enter the IAM access role ARN. 

       Example IAM ARN: `arn:aws:iam::<account-id>:role/<role-name>` 

1.  Under **Version information**, enter a **Version name** and **Release notes**. 

1.  Under **Model input details**, enter a summary of the model inputs and provide sample input data for real-time and batch job inputs. Optionally, you can provide any input limitations. 

1.  (Optional) Under **Input parameters**, provide detailed information about each input parameter supported by your product. You can provide the parameter name, a description, constraints, and specify if the parameter is required or optional. You can provide up to 24 input parameters. 

1.  (Optional) Under **Custom attributes**, provide any custom invocation parameters supported by your product. For each attribute, you can provide a name, description, constraints, and specify if the attribute is required or optional. 

1.  Under **Model output details**, enter a summary of the model outputs and provide sample output data for real-time and batch job outputs. Optionally, you can provide any output limitations. 

1.  (Optional) Under **Output parameters**, provide detailed information about each output parameter supported by your product. You can provide the parameter name, a description, constraints, and specify if the parameter is required or optional. You can provide up to 24 output parameters. 

1.  Under **Usage instructions**, provide clear instructions for using your model effectively such as best practices, how to handle common edge cases, or performance optimization suggestions. 

1.  Under **Git repository and notebook links**, provide links to example notebooks and Git repository. Sample notebooks should include how to invoke your model. Your Git repository should include notebooks, data files, and other developer tools. 

1.  Under **Recommended instance types**, select the recommended instance types for your product. 

   For *model packages*, you'll select recommended instance types for both batch transform and real-time inference.

   For *algorithm packages*, you'll select the recommended instance type for training jobs.
**Note**  
 The instance types available to select are limited to those supported by your model or algorithm package. These supported instance types were determined when you initially created your resources in Amazon SageMaker AI. This ensures that your product is only associated with hardware configurations that can effectively run your machine learning solution. 

1. Choose **Next** to move to the next step in the wizard.
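
As a quick sanity check before entering ARNs in the wizard, the documented formats can be validated with a short script. The helper below is a sketch, not part of any AWS SDK; the account ID and resource names in the patterns follow the ARN shapes shown above.

```python
import re

# Hypothetical helper: checks that a SageMaker AI resource ARN matches the
# documented shape before you paste it into the listing wizard.
ARN_PATTERNS = {
    "model-package": re.compile(
        r"^arn:aws:sagemaker:[a-z0-9-]+:\d{12}:model-package/[a-zA-Z0-9](-*[a-zA-Z0-9]){0,62}$"
    ),
    "algorithm": re.compile(
        r"^arn:aws:sagemaker:[a-z0-9-]+:\d{12}:algorithm/[a-zA-Z0-9](-*[a-zA-Z0-9]){0,62}$"
    ),
}

def looks_like(arn: str, resource: str) -> bool:
    """Return True if the ARN matches the documented format for the resource type."""
    return bool(ARN_PATTERNS[resource].match(arn))
```

A mistyped Region, a truncated account ID, or a model-package ARN pasted into the algorithm field will fail the check before the wizard rejects it.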

# Step 4: Configure the pricing model
<a name="set-pricing-model"></a>

 When configuring your product's pricing model, you can offer your product for free or implement usage-based pricing. Your pricing model cannot be changed after you've published the product. 

1.  Choose a pricing model. Batch transform and algorithm training products can only be free or charged for hourly usage. 
   +  If you chose to offer your product for free, choose **Next** and continue the wizard. 
   +  If you chose usage pricing, continue these steps. 

1.  If you chose to charge based on usage, you can enter usage costs. You can choose to enter a price that applies to all instance types or enter a price per instance type for more granular pricing. 

1.  Select **Yes, offer a free trial** if you'd like to offer a free trial of your product. 

1. Choose **Next** to move to the next step in the wizard.

# Step 5: Configure refund policy
<a name="configure-refund-policy"></a>

 Though you're not required to offer refunds, you must file an official refund policy with AWS Marketplace. 

1. Enter a refund policy.

1.  Choose **Next** to move to the next step in the wizard. 

# Step 6: Configure EULA
<a name="configure-eula"></a>

 In this step, you'll choose the legal agreement that will govern how customers can use your product. You can either select AWS's standard contract terms or upload your own custom end-user license agreement (EULA). 

1.  Select either the standard contract or provide a custom end-user license agreement. 

1.  Choose **Next** to move to the next step in the wizard. 

# Step 7: Configure allowlist
<a name="configure-allowlist"></a>

 Before submitting your product, you'll need to specify which AWS accounts can access it. This optional step controls the initial visibility of your product, limiting access to your own account and any specifically authorized AWS accounts you add to the allowlist. 

1.  Enter the AWS account IDs that you want to allow to access your product.

1.  Choose **Submit** to submit your product. 

    Your product will have the **Limited visibility** status and will only be visible to the AWS account that created the product and other allow-listed AWS accounts. 

    For more information on statuses, see [Machine learning product status](ml-product-lifecycle.md#ml-product-status). 

 You can view and test your product listing while it's in **Limited visibility**. When you're ready to change the visibility of your product, see [Updating product visibility](ml-update-visibility.md). 

# Managing your machine learning products
<a name="ml-product-management"></a>

In the AWS Marketplace Management Portal, choose **Request changes** to modify a product or version in AWS Marketplace. When you submit your changes, the system processes them. Processing time varies from minutes to days, depending on the type of modification. You can monitor the status of your changes in the AWS Marketplace Management Portal. 

**Topics**
+ [Updating product information](ml-update-product.md)
+ [Updating product visibility](ml-update-visibility.md)
+ [Updating the allowlist](ml-update-allowlist.md)
+ [Managing product versions](ml-manage-product-version.md)
+ [Updating product pricing](ml-update-public-offer.md)
+ [Updating your refund policy](ml-update-refund-policy.md)
+ [Updating your EULA](ml-update-eula.md)
+ [Removing a product](ml-remove-a-product.md)

**Note**  
 In addition to making changes through the AWS Marketplace Management Portal, you can also make changes using the [AWS Marketplace Catalog API](https://docs.aws.amazon.com/marketplace/latest/APIReference/welcome.html). 

# Updating product information
<a name="ml-update-product"></a>

 After creating your machine learning (ML) product, you can modify certain product information in AWS Marketplace, such as descriptions, highlights, title, SKU, categories, and keywords. 

1.  Sign in to your seller account in the [AWS Marketplace Management Portal](https://aws.amazon.com/marketplace/management/tour/). 

1.  Go to the **Machine learning products** page and select your target product. 

1.  Choose **Request changes** and select **Update product information**. 

1.  Update the fields as needed. 
**Note**  
 For logo specifications, see [Company and product logo requirements](product-submission.md#seller-and-product-logos). 

1.  Choose **Submit**. 

 You can monitor your request from the **Requests** tab of the **Machine learning** products page. For more information on statuses, see [Machine learning product status](ml-product-lifecycle.md#ml-product-status). 

# Updating product visibility
<a name="ml-update-visibility"></a>

1.  Sign in to your seller account in the [AWS Marketplace Management Portal](https://aws.amazon.com/marketplace/management/tour/). 

1.  Go to the **Machine learning product** page and select your product. 

1.  Choose **Request changes**, select **Update product visibility**, and then select **Public** or **Restricted**. 

1.  Review your changes and choose **Submit**. 

 You can monitor your request from the **Requests** tab of the **Machine learning** products page. For more information on statuses, see [Machine learning product status](ml-product-lifecycle.md#ml-product-status). 

# Updating the allowlist
<a name="ml-update-allowlist"></a>

1.  Sign in to your seller account in the [AWS Marketplace Management Portal](https://aws.amazon.com/marketplace/management/tour/). 

1.  Go to the **Machine learning product** page and select your product. 

1.  Choose **Request changes** and select **Update allowlist**. 

1.  Modify the information you need to change and choose **Submit**. For more information, see [Step 7: Configure allowlist](configure-allowlist.md). 

 You can monitor your request from the **Requests** tab of the **Machine learning** products page. For more information on statuses, see [Machine learning product status](ml-product-lifecycle.md#ml-product-status). 

# Managing product versions
<a name="ml-manage-product-version"></a>

 As a seller, you can manage your product versions in AWS Marketplace by updating existing version information, adding new versions, or removing versions that are no longer supported. Each version has a unique SageMaker AI ARN and associated information that buyers use to evaluate and deploy your product. 

**Note**  
 Before adding versions, create a product ID and establish pricing. For more information, see [Step 1: Create a new listing](create-new-listing.md). 

## Updating version information
<a name="ml-updating-versions"></a>

 After creating a version, you can modify its associated information such as release notes, usage instructions, and instance recommendations. 

**Note**  
 Version names and ARNs cannot be modified. These changes require creating a new version. 

1.  Sign in to your seller account in the [AWS Marketplace Management Portal](https://aws.amazon.com/marketplace/management/tour/). 

1.  Go to the **Machine learning product** page and select your product. 

1.  Choose **Request changes** and select **Update version information**. 

1.  Select the version you want to update. 

1.  Choose **Edit version**. 

1.  Modify the necessary fields and choose **Next**. 

1.  Enter your pricing information and choose **Submit**. For more information, see [Step 4: Configure the pricing model](set-pricing-model.md). 

 You can monitor your request from the **Requests** tab of the **Machine learning** products page. For more information on statuses, see [Machine learning product status](ml-product-lifecycle.md#ml-product-status). 

## Adding new versions
<a name="ml-adding-new-versions"></a>

 You can add new versions of your product to introduce features, updates, or improvements while maintaining access to previous versions. 

1.  Sign in to your seller account in the [AWS Marketplace Management Portal](https://aws.amazon.com/marketplace/management/tour/). 

1.  Go to the **Machine learning product** page and select your product. 

1.  Choose **Versions** and choose **Add new version**. 

1.  Enter information for the new version following the steps in [Step 3: Add initial product version](add-initial-version.md). 

1.  Enter your pricing information and choose **Submit**. For more information, see [Step 4: Configure the pricing model](set-pricing-model.md). 

When you have successfully added a new version, buyers receive an email notification that a new version is available.

## Restricting versions
<a name="ml-restricting-versions"></a>

 When a version becomes outdated or you want to discontinue its availability, you can restrict buyer access to that version while maintaining access to other versions. 

1.  Sign in to your seller account in the [AWS Marketplace Management Portal](https://aws.amazon.com/marketplace/management/tour/). 

1.  Go to the **Machine learning product** page and select your product. 

1.  Choose **Versions** and choose **Restrict versions**. 
**Note**  
 You must always have at least one version available. 

1.  Choose **Submit**. 

 When you have successfully restricted a version, buyers receive an email notification that the version was restricted. 

# Updating product pricing
<a name="ml-update-public-offer"></a>

 You can modify the rates and free trial period of your machine learning product in AWS Marketplace, though the pricing model itself cannot be changed. Note that for paid models, price increases take effect after a 90-day notice period, on the first day of the following month. Additional price changes cannot be made during this notice period. 

1.  Sign in to your seller account in the [AWS Marketplace Management Portal](https://aws.amazon.com/marketplace/management/tour/). 

1.  Go to the **Machine learning product** page and select your product. 

1.  Choose **Request changes**, select **Update public offer**, and then select **Edit offer information**. 

1.  Modify the information you need to change and choose **Submit**. 

 You can monitor your request from the **Requests** tab of the **Machine learning** products page. For more information on statuses, see [Machine learning product status](ml-product-lifecycle.md#ml-product-status). 

# Updating your refund policy
<a name="ml-update-refund-policy"></a>

1.  Sign in to your seller account in the [AWS Marketplace Management Portal](https://aws.amazon.com/marketplace/management/tour/). 

1.  Go to the **Machine learning product** page and select your product. 

1.  Choose **Request changes**, select **Update public offer**, and then select **Update refund policy**. 

1.  Modify the information you need to change and choose **Submit**. 

 You can monitor your request from the **Requests** tab of the **Machine learning** products page. For more information on statuses, see [Machine learning product status](ml-product-lifecycle.md#ml-product-status). 

# Updating your EULA
<a name="ml-update-eula"></a>

1.  Sign in to your seller account in the [AWS Marketplace Management Portal](https://aws.amazon.com/marketplace/management/tour/). 

1.  Go to the **Machine learning product** page and select your product. 

1.  Choose **Request changes**, select **Update public offer**, and then select **Update EULA**. 

1.  Modify the information you need to change and choose **Submit**. 

 You can monitor your request from the **Requests** tab of the **Machine learning** products page. For more information on statuses, see [Machine learning product status](ml-product-lifecycle.md#ml-product-status). 

# Removing a product
<a name="ml-remove-a-product"></a>

 You can remove (sunset) your published product from AWS Marketplace. Once removed, new customers cannot subscribe, but you must support existing customers for at least 90 days. 

 The following are the effects of removing a product from AWS Marketplace: 
+  The product will be removed from AWS Marketplace search and discovery tools. 
+  Subscribe functionality will be disabled. 
+  The product details page remains accessible via direct URL. 
+  Current subscribers retain access until they cancel their subscription. 
+  AWS Marketplace notifies current buyers about the removal. 

**To remove your machine learning product:**

1.  Sign in to your seller account in the [AWS Marketplace Management Portal](https://aws.amazon.com/marketplace/management/tour/). 

1.  Go to the **Machine learning product** page and select your product. 

1.  Choose **Request changes**, select **Update product visibility**, and then select **Restricted**. 

1.  (Optional) Enter a replacement product ID. 

1.  Review the changes and then choose **Submit**. 

 You can monitor your request from the **Requests** tab of the **Machine learning** products page. For more information on statuses, see [Machine learning product status](ml-product-lifecycle.md#ml-product-status). 

 Once removed, the product appears in the **Current products** list where you can only download product spreadsheets. If you have questions about removing products, contact the [AWS Marketplace Seller Operations](https://aws.amazon.com/marketplace/management/contact-us/) team. 

# Creating private offers for machine learning products
<a name="machine-learning-private-offers"></a>

 You can negotiate and offer a private offer directly to customers for your machine learning products. For more information on private offers, see [Preparing a private offer for your AWS Marketplace product](private-offers-overview.md). 

**Prerequisites:**
+  You must have a paid listing in AWS Marketplace. 
+  You must have access to the AWS Marketplace Management Portal (AMMP). 

**To create a private offer for a machine learning product:**

1.  Sign in to the AWS Marketplace Management Portal. 

1. Choose **Offers**, and then choose **Create private offer**.

1.  On the **Create private offer** page, select the product that you want to create a private offer for. You can only create offers for available products. 

1.  On the **Offer details** page: 

   1.  Enter the offer name and description. 

   1.  Select the renewal option. 

   1.  Set the offer expiration date. Offers expire at 23:59:59 UTC on the set date. 

1. Choose **Next** twice.

1.  On the **Configure offer pricing and duration** page, specify: 
   +  Pricing option. For more information, see [Private offers for ML products](https://docs.aws.amazon.com/marketplace/latest/userguide/private-offers-supported-product-types.html#ml-private-offers).
   +  Usage or contract duration
   +  Offer currency
   +  Pricing dimensions. For usage pricing, usage-based rates apply only during the offer term. For contracts, usage-based rates apply only after the contract term expires and are perpetual.
**Note**  
For more information on installment plans, see [Private offer installment plans](installment-plans.md). 

1. Choose **Next**.

1. On the **Add buyers** page, enter the AWS account IDs for your buyers. Then choose **Next**. 
**Important**  
For linked accounts to benefit from a private offer:  
+ Include the payer AWS account ID.
+ The payer account must accept the hourly terms of the private offer first.
+ After the payer account accepts, linked accounts can then accept the private offer.

1. On the **Configure legal terms and offer documents** page, add any custom terms, then choose **Next**.
**Note**  
 You can add up to five files (legal terms, statement of work, bill of materials, pricing sheet, or addendums). The system combines these into one document. 

1. On the **Review and create** page, verify the offer details and choose **Create offer**.

1. After the offer appears on the **Manage private offers** page, open the **Actions** menu, choose **Copy offer URL**, and email it to the buyer.
**Note**  
 Offers may take time to publish. You can edit offers on the **Manage private offers** page until a buyer accepts. 

# Requirements and best practices for creating machine learning products
<a name="ml-listing-requirements-and-best-practices"></a>

It is important that your buyers find it easy to test your model package and algorithm products. The following sections describe best practices for ML products. For a complete summary of requirements and recommendations, see the [Summary of requirements and recommendations for ML product listings](#ml-summary-table-of-requirements-and-recommendations).

**Note**  
If your published products don't meet these requirements, an AWS Marketplace representative might contact you to help you meet them.

**Topics**
+ [General best practices for ML products](#ml-general-best-practices)
+ [Requirements for usage information](#ml-requirements-for-usage-information)
+ [Requirements for inputs and outputs](#ml-requirements-for-inputs-and-outputs)
+ [Requirements for Jupyter notebook](#ml-requirements-for-jupyter-notebook)
+ [Summary of requirements and recommendations for ML product listings](#ml-summary-table-of-requirements-and-recommendations)

## General best practices for ML products
<a name="ml-general-best-practices"></a>

 Provide the following information for your machine learning product: 
+  For product descriptions, include the following: 
  +  What your model does 
  +  Who the target customer is 
  +  What the most important use case is 
  +  How your model was trained or the amount of data that was used 
  +  What the performance metrics are and the validation data used 
  +  If medical, whether or not your model is for diagnostic use 
+ By default, machine learning products are configured to have public visibility. However, you can create a product with limited visibility. For more information, see [Step 7: Configure allowlist](configure-allowlist.md).
+  (Optional) For paid products, offer a free trial of 14–30 days for customers to try your product. For more information, see [Machine learning product pricing for AWS Marketplace](machine-learning-pricing.md). 

## Requirements for usage information
<a name="ml-requirements-for-usage-information"></a>

Clear usage information that describes the expected inputs and outputs of your product (with examples) is crucial for driving a positive buyer experience. 

With each new version of your resource that you add to your product listing, you must provide usage information. 

To edit the existing usage information for a specific version, see [Updating version information](ml-manage-product-version.md#ml-updating-versions).

## Requirements for inputs and outputs
<a name="ml-requirements-for-inputs-and-outputs"></a>

A clear explanation of supported input parameters and returned output parameters with examples is important to help your buyers to understand and use your product. This understanding helps your buyers to perform any necessary transformations on the input data to get the best inference results. 

You will be prompted for the following when adding your Amazon SageMaker AI resource to your product listing.

### Inference inputs and outputs
<a name="ml-inference-inputs-and-outputs"></a>

For inference input, provide a description of the input data your product expects for both real-time endpoint and batch transform job. Include code snippets for any necessary preprocessing of the data. Include limitations, if applicable. Provide input samples hosted on [GitHub](https://github.com).

For inference output, provide a description of the output data your product returns for both real-time endpoint and batch transform job. Include limitations, if applicable. Provide output samples hosted on [GitHub](https://github.com). 

For samples, provide input files that work with your product. If your model performs multiclass classification, provide at least one sample input file for each class. 
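
As an illustration of the kind of preprocessing snippet to include in your usage information, the following sketch (the field names and endpoint name are illustrative) serializes raw records into the `text/csv` body a real-time endpoint might expect:

```python
import csv
import io

def to_csv_body(records, columns):
    """Serialize records into a text/csv request body, one row per record."""
    buf = io.StringIO()
    writer = csv.writer(buf)
    for rec in records:
        writer.writerow([rec[c] for c in columns])
    return buf.getvalue()

# Illustrative input records; a real listing would document the exact schema.
records = [{"age": 34, "income": 52000}, {"age": 51, "income": 87000}]
body = to_csv_body(records, ["age", "income"])

# The body would then be sent with the SageMaker AI runtime client, e.g.:
# import boto3
# runtime = boto3.client("sagemaker-runtime")
# response = runtime.invoke_endpoint(
#     EndpointName="my-endpoint",  # assumed endpoint name
#     ContentType="text/csv",
#     Body=body,
# )
```

Pairing a snippet like this with hosted sample files lets buyers reproduce a successful invocation without guessing at the wire format.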

### Training inputs
<a name="ml-training-inputs"></a>

In the **Information to train a model** section, provide the input data format and code snippets for any necessary preprocessing of the data. Include a description of values and limitations, if applicable. Provide input samples hosted on [GitHub](https://github.com). 

Explain both optional and mandatory features that can be provided by the buyer, and specify whether the `PIPE` input mode is supported. If [distributed training](https://docs.aws.amazon.com/sagemaker/latest/dg/your-algorithms-training-algo-running-container.html#your-algorithms-training-algo-running-container-dist-training) (training with more than one CPU/GPU instance) is supported, specify this. For tuning, list the recommended hyperparameters. 
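
To show what buyers ultimately assemble from this information, the following sketch builds a `create_training_job` request for an algorithm product, including `Pipe` input mode and hyperparameters. All names, S3 paths, and hyperparameter values are illustrative.

```python
# Illustrative create_training_job request for an algorithm product.
training_request = {
    "TrainingJobName": "example-training-job",
    "AlgorithmSpecification": {
        "AlgorithmName": "arn:aws:sagemaker:us-east-1:123456789012:algorithm/my-algo",
        "TrainingInputMode": "Pipe",  # only if your algorithm supports PIPE mode
    },
    # Document each hyperparameter's range and impact in your listing.
    "HyperParameters": {"max_depth": "6", "num_round": "100"},
    "InputDataConfig": [{
        "ChannelName": "training",
        "DataSource": {"S3DataSource": {
            "S3DataType": "S3Prefix",
            "S3Uri": "s3://example-bucket/train/",
        }},
        "ContentType": "text/csv",
    }],
    "OutputDataConfig": {"S3OutputPath": "s3://example-bucket/output/"},
    "ResourceConfig": {"InstanceType": "ml.m5.xlarge", "InstanceCount": 1,
                       "VolumeSizeInGB": 30},
    "RoleArn": "arn:aws:iam::123456789012:role/ExampleRole",
    "StoppingCondition": {"MaxRuntimeInSeconds": 3600},
}

# With credentials configured, this would be submitted as:
# import boto3
# boto3.client("sagemaker").create_training_job(**training_request)
```

Documenting a complete request like this makes clear which channels, content types, and hyperparameters your algorithm accepts.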

## Requirements for Jupyter notebook
<a name="ml-requirements-for-jupyter-notebook"></a>

When adding your SageMaker AI resource to your product listing, provide a link to a sample Jupyter notebook hosted on [GitHub](https://github.com) that demonstrates the complete workflow without asking the buyer to upload or find any data. 

Use the AWS SDK for Python (Boto). A well-developed sample notebook makes it easier for buyers to try and use your listing. 
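
For reference, the deployment flow such a notebook walks through can be sketched with boto3 as follows. Every name here is illustrative, and the requests are shown as dicts rather than executed.

```python
# Illustrative deployment flow for a model package product: create a model
# from the package ARN, then an endpoint config, then an endpoint.
model_package_arn = "arn:aws:sagemaker:us-east-1:123456789012:model-package/my-model"

create_model_request = {
    "ModelName": "my-model",
    "ExecutionRoleArn": "arn:aws:iam::123456789012:role/ExampleRole",
    "PrimaryContainer": {"ModelPackageName": model_package_arn},
    "EnableNetworkIsolation": True,  # marketplace model packages run network-isolated
}
create_endpoint_config_request = {
    "EndpointConfigName": "my-endpoint-config",
    "ProductionVariants": [{
        "VariantName": "AllTraffic",
        "ModelName": "my-model",
        "InstanceType": "ml.m5.xlarge",  # use a recommended instance type
        "InitialInstanceCount": 1,
    }],
}
create_endpoint_request = {
    "EndpointName": "my-endpoint",
    "EndpointConfigName": "my-endpoint-config",
}

# With credentials configured, the notebook would run:
# import boto3
# sm = boto3.client("sagemaker")
# sm.create_model(**create_model_request)
# sm.create_endpoint_config(**create_endpoint_config_request)
# sm.create_endpoint(**create_endpoint_request)
```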

For model package products, your sample notebook demonstrates the preparation of input data, creation of an endpoint for real-time inference, and performance of batch-transform jobs. For more information, see [Model Package listing and Sample notebook](https://github.com/aws/amazon-sagemaker-examples/tree/main/aws_marketplace/curating_aws_marketplace_listing_and_sample_notebook/ModelPackage) on GitHub. For a sample notebook, see [auto\_insurance](https://github.com/awslabs/amazon-sagemaker-examples/tree/master/aws_marketplace/using_model_packages/auto_insurance). The notebook works in all AWS Regions, without entering any parameters and without a buyer needing to locate sample data.

**Note**  
An underdeveloped sample Jupyter notebook that does not show multiple possible inputs and data preprocessing steps might make it difficult for the buyer to fully understand your product's value proposition. 

For algorithm products, the sample notebook demonstrates complete training, tuning, model creation, the creation of an endpoint for real-time inference, and the performance of batch-transform jobs. For more information, see [Algorithm listing and Sample notebook](https://github.com/awslabs/amazon-sagemaker-examples/tree/master/aws_marketplace/curating_aws_marketplace_listing_and_sample_notebook/Algorithm) on GitHub. For sample notebooks, see [amazon\_demo\_product](https://github.com/awslabs/amazon-sagemaker-examples/tree/master/aws_marketplace/using_algorithms/amazon_demo_product) and [automl](https://github.com/awslabs/amazon-sagemaker-examples/tree/master/aws_marketplace/using_algorithms/automl) on GitHub. These sample notebooks work in all Regions without entering any parameters and without a buyer needing to locate sample data. 

**Note**  
A lack of example training data might prevent your buyer from running the Jupyter notebook successfully. An underdeveloped sample notebook might prevent your buyers from using your product and hinder adoption. 

## Summary of requirements and recommendations for ML product listings
<a name="ml-summary-table-of-requirements-and-recommendations"></a>

The following table provides a summary of the requirements and recommendations for a machine learning product listing page.


|  **Details**  |  **For model package listings**  |  **For algorithm listings**  | 
| --- |--- |--- |
| **Product descriptions** | 
| --- |
| Explain in detail what the product does for supported content types (for example, "detects X in images").  |  Required  |  Required  | 
| Provide compelling and differentiating information about the product (avoid adjectives like "best" or unsubstantiated claims).  |  Recommended  |  Recommended  | 
| List most important use case(s) for this product.  |  Required  |  Required  | 
| Describe the data (source and size) it was trained on and list any known limitations.  |  Required  |  Not applicable | 
| Describe the core framework that the model was built on.  |  Recommended  |  Recommended  | 
| Summarize model performance metric on validation data (for example, "XX.YY percent accuracy benchmarked using the Z dataset").  |  Required  |  Not applicable | 
| Summarize model latency and/or throughput metrics on recommended instance type.  |  Required  |  Not applicable | 
| Describe the algorithm category. For example, “This decision forest regression algorithm is based on an ensemble of tree-structured classifiers that are built using the general technique of bootstrap aggregation and a random choice of features.”  |  Not applicable |  Required  | 
| **Usage information** | 
| --- |
| For inference, provide a description of the expected input format for both the real-time endpoint and batch transform job. Include limitations, if applicable. See [Requirements for inputs and outputs](#ml-requirements-for-inputs-and-outputs).  |  Required  |  Required  | 
| For inference, provide input samples for both the real-time endpoint and batch transform job. Samples must be hosted on GitHub. See [Requirements for inputs and outputs](#ml-requirements-for-inputs-and-outputs).  |  Required  |  Required  | 
| For inference, provide the name and description of each input parameter. Provide details about its limitations and specify if it is required or optional. | Recommended | Recommended | 
| For inference, provide details about the output data your product returns for both the real-time endpoint and batch transform job. Include any limitations, if applicable. See [Requirements for inputs and outputs](#ml-requirements-for-inputs-and-outputs).  |  Required  |  Required  | 
| For inference, provide output samples for both the real-time endpoint and batch transform job. Samples must be hosted on GitHub. See [Requirements for inputs and outputs](#ml-requirements-for-inputs-and-outputs).  |  Required  |  Required  | 
| For inference, provide an example of using an endpoint or batch transform job. Include a code example using the AWS Command Line Interface (AWS CLI) commands or using an AWS SDK.  |  Required  |  Required  | 
| For inference, provide the name and description of each output parameter. Specify if it is always returned.  | Recommended | Recommended | 
| For training, provide details about necessary information to train the model such as minimum rows of data required. See [Requirements for inputs and outputs](#ml-requirements-for-inputs-and-outputs).  |  Not applicable |  Required  | 
| For training, provide input samples hosted on GitHub. See [Requirements for inputs and outputs](#ml-requirements-for-inputs-and-outputs).  |  Not applicable |  Required  | 
| For training, provide an example of performing training jobs. Describe the supported hyperparameters, their ranges, and their overall impact. Specify if the algorithm supports hyperparameter tuning, distributed training, or GPU instances. Include code example such as AWS CLI commands or using an AWS SDK, for example.  |  Not applicable |  Required  | 
| Provide a Jupyter notebook hosted on GitHub demonstrating complete use of your product. See [Requirements for Jupyter notebook](#ml-requirements-for-jupyter-notebook).  |  Required  |  Required  | 
| Provide technical information related to the usage of the product, including user manuals and sample data.  |  Recommended  |  Recommended  | 

# Troubleshooting issues with machine learning products
<a name="ml-troubleshooting"></a>

 This section provides help for some common errors that you might encounter during the publishing process for your machine learning product. If your issue isn't listed, contact the [AWS Marketplace Seller Operations](https://aws.amazon.com/marketplace/management/contact-us/) team. 

## General: I get a 400 error when I add the Amazon Resource Name (ARN) of my model package or algorithm in the AWS Marketplace Management Portal
<a name="troubleshooting_error_code_400"></a>

### Common cause
<a name="troubleshooting_common_cause"></a>

 When creating your machine learning product in SageMaker AI, you didn't choose to publish your product in AWS Marketplace. 

### Resolution
<a name="troubleshooting_resolution"></a>

 If you used the Amazon SageMaker AI console to create your resource, you must choose **Yes** on the final page of the process for **Publish this model package in AWS Marketplace** or **Yes** for **Publish this algorithm in AWS Marketplace**. You can't choose **No** and publish the resource later. Choosing **Yes** doesn't publish the model package or algorithm itself; it validates the resource when it is created, which is required before it can be used in AWS Marketplace.

 If you're using the AWS SDK to [create a model package](https://docs.aws.amazon.com/sagemaker/latest/APIReference/API_CreateModelPackage.html#sagemaker-CreateModelPackage-request-CertifyForMarketplace) or [ create an algorithm](https://docs.aws.amazon.com/sagemaker/latest/APIReference/API_CreateAlgorithm.html#sagemaker-CreateAlgorithm-request-CertifyForMarketplace), ensure that the parameter `CertifyForMarketplace` is set to `true`. 
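As a sketch, a `CreateModelPackage` request with the flag set might be built like the following. The model package name and container image URI are hypothetical placeholders, and the dictionary would be passed to the SageMaker AI client's `create_model_package` call (for example, via boto3):

```python
# Sketch of a CreateModelPackage request with marketplace certification
# enabled. The names and image URI below are hypothetical placeholders.
request = {
    "ModelPackageName": "my-model-package-name",
    "InferenceSpecification": {
        "Containers": [
            {"Image": "111122223333.dkr.ecr.us-east-2.amazonaws.com/my-inference-image:latest"}
        ],
        "SupportedContentTypes": ["text/csv"],
        "SupportedResponseMIMETypes": ["text/csv"],
    },
    # Required for AWS Marketplace: if this is False (the default), adding
    # the resulting ARN in the AWS Marketplace Management Portal fails.
    "CertifyForMarketplace": True,
}

# With boto3, this dictionary would be passed as:
#   boto3.client("sagemaker").create_model_package(**request)
print(request["CertifyForMarketplace"])
```

The equivalent `CreateAlgorithm` request takes the same `CertifyForMarketplace` parameter.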

After you re-create your certified and validated model package or algorithm resource, add the new ARN in the AWS Marketplace Management Portal. 

## General: I get a 404 error when I add the ARN of my model package or algorithm in the AWS Marketplace Management Portal
<a name="troubleshooting_error_code_404"></a>

### Common cause
<a name="troubleshooting_common_cause"></a>

 This error can happen for several reasons: 
+  The ARN might be invalid. 
+  The model package or algorithm resource wasn't created in the same AWS account as the seller account. 
+  The user or role that you use for publishing doesn't have the correct IAM permissions to access the model package or algorithm resource. 

### Resolution
<a name="troubleshooting_resolution"></a>

1.  Check that the ARN is correct and in the expected format: 

    For model packages, the ARNs should look similar to `arn:aws:sagemaker:us-east-2:000123456789:model-package/my-model-package-name`. 

    For algorithms, the ARNs should look similar to `arn:aws:sagemaker:us-east-2:000123456789:algorithm/my-algorithm`. 

1.  Ensure that all resources and assets for publishing are in the seller account that you are publishing from. 

1.  Ensure that your user or role has the following permissions: 

    For model packages, the action `sagemaker:DescribeModelPackage` on the model package resource must be allowed. 

    For algorithms, the action `sagemaker:DescribeAlgorithm` on the algorithm resource must be allowed. 
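To double-check the ARN shape before adding it in the portal, a quick sketch like the following can help. The pattern below is an assumption inferred from the example ARNs above, not an official specification:

```python
import re

# Rough pattern based on the example ARNs above: partition "aws", a
# Region, a 12-digit account ID, and a model-package or algorithm name.
ARN_PATTERN = re.compile(
    r"^arn:aws:sagemaker:[a-z0-9-]+:\d{12}:"
    r"(model-package|algorithm)/[A-Za-z0-9][A-Za-z0-9-]*$"
)

def looks_like_valid_arn(arn: str) -> bool:
    """Return True if the ARN matches the expected shape."""
    return ARN_PATTERN.match(arn) is not None

print(looks_like_valid_arn(
    "arn:aws:sagemaker:us-east-2:000123456789:model-package/my-model-package-name"
))  # True
```

A matching ARN can still return a 404 if it belongs to a different account or if the describe permissions above are missing, so this check only rules out formatting mistakes.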

## Amazon SageMaker AI: I get a “Client error: Access denied for registry” failure message when I create a model package or algorithm resource
<a name="troubleshooting_error_sm_access_denied"></a>

### Common cause
<a name="troubleshooting_common_cause"></a>

This error can happen when the image that is being used to create the model package or algorithm is stored in an [Amazon ECR](https://aws.amazon.com/ecr/) repository that belongs to another AWS account. Model package or algorithm validation does not support cross-account images.

### Resolution
<a name="troubleshooting_resolution"></a>

Copy the image to an Amazon ECR repository owned by the AWS account that you are using to publish. Then, proceed with creating the resource using the new image location.
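A command sequence along the following lines can perform the copy. The account ID, Region, repository, and image names are hypothetical placeholders, and the commands assume Docker plus AWS CLI credentials for the publishing account with access to the source image:

```shell
# Hypothetical placeholders - replace with your own values.
ACCOUNT_ID=111122223333
REGION=us-east-2
SOURCE_IMAGE="444455556666.dkr.ecr.us-east-2.amazonaws.com/shared-image:latest"
TARGET_REPO="my-inference-image"
TARGET_IMAGE="${ACCOUNT_ID}.dkr.ecr.${REGION}.amazonaws.com/${TARGET_REPO}:latest"

# Create the repository in your publishing account (skip if it exists).
aws ecr create-repository --repository-name "${TARGET_REPO}" --region "${REGION}"

# Authenticate Docker to Amazon ECR in your publishing account.
aws ecr get-login-password --region "${REGION}" \
  | docker login --username AWS --password-stdin "${ACCOUNT_ID}.dkr.ecr.${REGION}.amazonaws.com"

# Pull the source image, retag it, and push it to your own repository.
docker pull "${SOURCE_IMAGE}"
docker tag "${SOURCE_IMAGE}" "${TARGET_IMAGE}"
docker push "${TARGET_IMAGE}"
```

After the push completes, use `${TARGET_IMAGE}` as the image URI when you re-create the model package or algorithm resource.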

## Amazon SageMaker AI: I get “Not Started” and “Client error: No scan scheduled...” failure messages when I create a model package or algorithm resource
<a name="troubleshooting_error_sm_failure"></a>

### Common cause
<a name="troubleshooting_common_cause"></a>

This error can happen when SageMaker AI fails to start a scan of your Docker container image stored in Amazon ECR.

### Resolution
<a name="troubleshooting_resolution"></a>

If this happens, open the [ Amazon ECR console](https://console.aws.amazon.com/ecr/repositories?region=us-east-2), find the repository that your image was uploaded to, choose the image, and then choose **Scan**.
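The scan can also be started from the AWS CLI. The repository name, tag, and Region below are hypothetical placeholders:

```shell
# Manually start a scan of the image in Amazon ECR. The repository name,
# tag, and Region are hypothetical placeholders.
aws ecr start-image-scan \
  --repository-name my-inference-image \
  --image-id imageTag=latest \
  --region us-east-2
```

Once the scan finishes, re-create the model package or algorithm resource so that SageMaker AI can pick up the scan results.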