

# Ready-to-use models
<a name="canvas-ready-to-use-models"></a>

With Amazon SageMaker Canvas Ready-to-use models, you can make predictions on your data without writing a single line of code or building a model yourself; all you have to bring is your data. Ready-to-use models are pre-built, so you can generate predictions without spending the time, expertise, or cost required to build a model, and you can choose from a variety of use cases ranging from language detection to expense analysis.

Canvas integrates with existing AWS services, such as [Amazon Textract](https://docs.aws.amazon.com/textract/latest/dg/what-is.html), [Amazon Rekognition](https://docs.aws.amazon.com/rekognition/latest/dg/what-is.html), and [Amazon Comprehend](https://docs.aws.amazon.com/comprehend/latest/dg/what-is.html), to analyze your data and make predictions or extract insights. You can use the predictive power of these services from within the Canvas application to get high quality predictions for your data.
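Canvas makes these service calls for you, but the same predictions are also available programmatically through the AWS SDKs. The following sketch shows roughly how a Comprehend sentiment call looks in Python; the helper function is illustrative (not part of Canvas), and the client is passed in as a parameter so the call shape can be seen without AWS credentials.

```python
def detect_sentiment(comprehend_client, text, language_code="en"):
    """Return the dominant sentiment label and its confidence score
    from an Amazon Comprehend DetectSentiment response."""
    response = comprehend_client.detect_sentiment(
        Text=text, LanguageCode=language_code
    )
    sentiment = response["Sentiment"]  # e.g. "POSITIVE"
    # SentimentScore keys are title-cased: Positive, Negative, Neutral, Mixed
    score = response["SentimentScore"][sentiment.capitalize()]
    return sentiment, score

# In a real session, you would create the client with:
#   import boto3
#   comprehend = boto3.client("comprehend")
#   detect_sentiment(comprehend, "I love this product!")
```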

Canvas supports the following Ready-to-use model types:


| Ready-to-use model | Description | Supported data type | 
| --- | --- | --- | 
| Sentiment analysis | Detect sentiment in lines of text, which can be positive, negative, neutral, or mixed. Currently, you can only do sentiment analysis for English language text. | Plain text or tabular (CSV, Parquet) | 
| Entities extraction | Extract entities, which are real-world objects such as people, places, and commercial items, or units such as dates and quantities, from text. | Plain text or tabular (CSV, Parquet) | 
| Language detection | Determine the dominant language in text such as English, French, or German. | Plain text or tabular (CSV, Parquet) | 
| Personal information detection | Detect personal information that could be used to identify an individual, such as addresses, bank account numbers, and phone numbers, from text. | Plain text or tabular (CSV, Parquet) | 
| Object detection in images | Detect objects, concepts, scenes, and actions in your images. | Image (JPG, PNG) | 
| Text detection in images | Detect text in your images. | Image (JPG, PNG) | 
| Expense analysis | Extract information from invoices and receipts, such as date, number, item prices, total amount, and payment terms. | Document (PDF, JPG, PNG, TIFF) | 
| Identity document analysis | Extract information from passports, driver's licenses, and other identity documents issued by the US government. | Document (PDF, JPG, PNG, TIFF) | 
| Document analysis | Analyze documents and forms for relationships among detected text. | Document (PDF, JPG, PNG, TIFF) | 
| Document queries | Extract information from structured documents such as paystubs, bank statements, W-2s, and mortgage application forms by asking questions using natural language. | Document (PDF) | 

## Get started
<a name="canvas-ready-to-use-get-started"></a>

To get started with Ready-to-use models, review the following information.

**Prerequisites**

To use Ready-to-use models in Canvas, you must turn on the **Canvas Ready-to-use models configuration** permissions when [setting up your Amazon SageMaker AI domain](https://docs.aws.amazon.com/sagemaker/latest/dg/canvas-getting-started.html#canvas-prerequisites). The **Canvas Ready-to-use models configuration** attaches the [AmazonSageMakerCanvasAIServicesAccess](https://docs.aws.amazon.com/sagemaker/latest/dg/security-iam-awsmanpol-canvas.html#security-iam-awsmanpol-AmazonSageMakerCanvasAIServicesAccess) policy to your Canvas user's AWS Identity and Access Management (IAM) execution role. If you encounter any issues with granting permissions, see the topic [Troubleshooting issues with granting permissions through the SageMaker AI console](canvas-limits.md#canvas-troubleshoot-trusted-services).

If you’ve already set up your domain, you can edit your domain settings and turn on the permissions. For instructions on how to edit your domain settings, see [Edit domain settings](https://docs.aws.amazon.com/sagemaker/latest/dg/domain-edit.html). When editing the settings for your domain, go to the **Canvas settings** and turn on the **Enable Canvas Ready-to-use models** option.

**(Optional) Opt out of AI services data storage**

Certain AWS AI services store and use your data to make improvements to the service. You can opt out of having your data stored or used for service improvements. To learn more about how to opt out, see [AI services opt-out policies](https://docs.aws.amazon.com/organizations/latest/userguide/orgs_manage_policies_ai-opt-out.html) in the *AWS Organizations User Guide*.

**How to use Ready-to-use models**

To get started with Ready-to-use models, do the following:

1. **(Optional) Import your data.** You can import a tabular, image, or document dataset and use Ready-to-use models to generate batch predictions (a dataset of predictions). To get started with importing a dataset, see [Create a data flow](canvas-data-flow.md).

1. **Generate predictions.** You can generate single or batch predictions with your chosen Ready-to-use model. To get started with making predictions, see [Make predictions for text data](canvas-ready-to-use-predict-text.md).

# Make predictions for text data
<a name="canvas-ready-to-use-predict-text"></a>

The following procedures describe how to make both single and batch predictions for text datasets. Each Ready-to-use model supports both **Single predictions** and **Batch predictions** for your dataset. A **Single prediction** is when you only need to make one prediction. For example, you have one image from which you want to extract text, or one paragraph of text for which you want to detect the dominant language. A **Batch prediction** is when you’d like to make predictions for an entire dataset. For example, you might have a CSV file of customer reviews for which you’d like to analyze the customer sentiment, or you might have image files in which you’d like to detect objects.

You can use these procedures for the following Ready-to-use model types: sentiment analysis, entities extraction, language detection, and personal information detection.

**Note**  
For sentiment analysis, you can only use English language text.

## Single predictions
<a name="canvas-ready-to-use-predict-text-single"></a>

To make a single prediction for Ready-to-use models that accept text data, do the following:

1. In the left navigation pane of the Canvas application, choose **Ready-to-use models**.

1. On the **Ready-to-use models** page, choose the Ready-to-use model for your use case. For text data, it should be one of the following: **Sentiment analysis**, **Entities extraction**, **Language detection**, or **Personal information detection**.

1. On the **Run predictions** page for your chosen Ready-to-use model, choose **Single prediction**.

1. For **Text field**, enter the text for which you’d like to get a prediction.

1. Choose **Generate prediction results** to get your prediction.

In the **Prediction results** pane on the right, you receive an analysis of your text along with a **Confidence** score for each result or label. For example, if you chose language detection and entered a passage of text in French, you might get French with a 95% confidence score and traces of other languages, such as English, with a 5% confidence score.
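Behind the scenes, language detection returns one confidence score per candidate language, in the shape of Amazon Comprehend's DetectDominantLanguage response. A minimal sketch of picking the dominant language from such a response (the sample scores mirror the French example above and are invented):

```python
def dominant_language(response):
    """Pick the language with the highest confidence score from a
    Comprehend DetectDominantLanguage response."""
    top = max(response["Languages"], key=lambda lang: lang["Score"])
    return top["LanguageCode"], top["Score"]

# A response shaped like the French example above:
example_response = {
    "Languages": [
        {"LanguageCode": "fr", "Score": 0.95},
        {"LanguageCode": "en", "Score": 0.05},
    ]
}
# dominant_language(example_response) returns ("fr", 0.95)
```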

The following screenshot shows the results for a single prediction using language detection where the model is 100% confident that the passage is English.

![\[Screenshot of the results of a single prediction with the language detection Ready-to-use model.\]](http://docs.aws.amazon.com/sagemaker/latest/dg/images/studio/canvas/canvas-ready-to-use/ai-solutions-text-prediction.png)


## Batch predictions
<a name="canvas-ready-to-use-predict-text-batch"></a>

To make batch predictions for Ready-to-use models that accept text data, do the following:

1. In the left navigation pane of the Canvas application, choose **Ready-to-use models**.

1. On the **Ready-to-use models** page, choose the Ready-to-use model for your use case. For text data, it should be one of the following: **Sentiment analysis**, **Entities extraction**, **Language detection**, or **Personal information detection**.

1. On the **Run predictions** page for your chosen Ready-to-use model, choose **Batch prediction**.

1. Choose **Select dataset** if you’ve already imported your dataset. If not, choose **Import new dataset**, and then you are directed through the import data workflow.

1. From the list of available datasets, select your dataset and choose **Generate predictions** to get your predictions.

After the prediction job finishes running, on the **Run predictions** page, you see an output dataset listed under **Predictions**. This dataset contains your results, and if you select the **More options** icon (![\[Vertical ellipsis icon representing a menu or more options.\]](http://docs.aws.amazon.com/sagemaker/latest/dg/images/studio/canvas/more-options-icon.png)), you can **Preview** the output data. Then, you can choose **Download** to download the results.
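After downloading a batch prediction output, you can load it for further analysis with standard tools. The snippet below is a hedged sketch: the header names (`review`, `Sentiment`, `Confidence`) are assumptions for illustration, so check the actual headers in your downloaded file.

```python
import csv
import io

# Sample rows shaped like a sentiment analysis batch output; the column
# names are assumed for illustration, not taken from a real Canvas file.
sample_output = io.StringIO(
    "review,Sentiment,Confidence\n"
    "Great service and fast shipping,POSITIVE,0.97\n"
    "Package arrived damaged,NEGATIVE,0.88\n"
)

rows = list(csv.DictReader(sample_output))
# Filter for the reviews the model labeled negative.
negative_reviews = [r["review"] for r in rows if r["Sentiment"] == "NEGATIVE"]
```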

# Make predictions for image data
<a name="canvas-ready-to-use-predict-image"></a>

The following procedures describe how to make both single and batch predictions for image datasets. Each Ready-to-use model supports both **Single predictions** and **Batch predictions** for your dataset. A **Single prediction** is when you only need to make one prediction. For example, you have one image from which you want to extract text, or one paragraph of text for which you want to detect the dominant language. A **Batch prediction** is when you’d like to make predictions for an entire dataset. For example, you might have a CSV file of customer reviews for which you’d like to analyze the customer sentiment, or you might have image files in which you’d like to detect objects.

You can use these procedures for the following Ready-to-use model types: object detection in images and text detection in images.

## Single predictions
<a name="canvas-ready-to-use-predict-image-single"></a>

To make a single prediction for Ready-to-use models that accept image data, do the following:

1. In the left navigation pane of the Canvas application, choose **Ready-to-use models**.

1. On the **Ready-to-use models** page, choose the Ready-to-use model for your use case. For image data, it should be one of the following: **Object detection in images** or **Text detection in images**.

1. On the **Run predictions** page for your chosen Ready-to-use model, choose **Single prediction**.

1. Choose **Upload image**.

1. You are prompted to select an image to upload from your local computer. Select the image from your local files, and then the prediction results are generated.

In the **Prediction results** pane on the right, you receive an analysis of your image along with a **Confidence** score for each object or text detected. For example, if you chose object detection in images, you receive a list of objects in the image along with a confidence score indicating how certain the model is that each object was accurately detected, such as 93%.

The following screenshot shows the results for a single prediction using the object detection in images solution, where the model predicts objects such as a clock tower and bus with 100% confidence.

![\[The results of a single prediction with the object detection solution in images Ready-to-use model.\]](http://docs.aws.amazon.com/sagemaker/latest/dg/images/studio/canvas/canvas-ready-to-use/ai-solutions-image-prediction.png)
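The detected objects and scores follow the shape of Amazon Rekognition's DetectLabels response, where confidence is reported on a 0–100 scale. A small illustrative sketch of filtering labels by confidence (the sample labels mirror the clock tower example above and are invented):

```python
def confident_labels(response, min_confidence=90.0):
    """Keep only the labels at or above a confidence threshold from a
    Rekognition DetectLabels response (confidence is on a 0-100 scale)."""
    return [
        label["Name"]
        for label in response["Labels"]
        if label["Confidence"] >= min_confidence
    ]

# A response shaped like the clock tower example above:
example_response = {
    "Labels": [
        {"Name": "Clock Tower", "Confidence": 99.8},
        {"Name": "Bus", "Confidence": 99.5},
        {"Name": "Bicycle", "Confidence": 41.2},
    ]
}
# confident_labels(example_response) keeps only "Clock Tower" and "Bus".
```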


## Batch predictions
<a name="canvas-ready-to-use-predict-image-batch"></a>

To make batch predictions for Ready-to-use models that accept image data, do the following:

1. In the left navigation pane of the Canvas application, choose **Ready-to-use models**.

1. On the **Ready-to-use models** page, choose the Ready-to-use model for your use case. For image data, it should be one of the following: **Object detection in images** or **Text detection in images**.

1. On the **Run predictions** page for your chosen Ready-to-use model, choose **Batch prediction**.

1. Choose **Select dataset** if you’ve already imported your dataset. If not, choose **Import new dataset**, and then you are directed through the import data workflow.

1. From the list of available datasets, select your dataset and choose **Generate predictions** to get your predictions.

After the prediction job finishes running, on the **Run predictions** page, you see an output dataset listed under **Predictions**. This dataset contains your results, and if you select the **More options** icon (![\[Vertical ellipsis icon representing a menu or more options.\]](http://docs.aws.amazon.com/sagemaker/latest/dg/images/studio/canvas/more-options-icon.png)), you can choose **View prediction results** to preview the output data. Then, you can choose **Download prediction** and download the results as a CSV or a ZIP file.

# Make predictions for document data
<a name="canvas-ready-to-use-predict-document"></a>

The following procedures describe how to make both single and batch predictions for document datasets. Each Ready-to-use model supports both **Single predictions** and **Batch predictions** for your dataset. A **Single prediction** is when you only need to make one prediction. For example, you have one image from which you want to extract text, or one paragraph of text for which you want to detect the dominant language. A **Batch prediction** is when you’d like to make predictions for an entire dataset. For example, you might have a CSV file of customer reviews for which you’d like to analyze the customer sentiment, or you might have image files in which you’d like to detect objects.

You can use these procedures for the following Ready-to-use model types: expense analysis, identity document analysis, and document analysis.

**Note**  
For document queries, only single predictions are currently supported.

## Single predictions
<a name="canvas-ready-to-use-predict-document-single"></a>

To make a single prediction for Ready-to-use models that accept document data, do the following:

1. In the left navigation pane of the Canvas application, choose **Ready-to-use models**.

1. On the **Ready-to-use models** page, choose the Ready-to-use model for your use case. For document data, it should be one of the following: **Expense analysis**, **Identity document analysis**, or **Document analysis**.

1. On the **Run predictions** page for your chosen Ready-to-use model, choose **Single prediction**.

1. If your Ready-to-use model is identity document analysis or document analysis, complete the following actions. If you’re doing expense analysis or document queries, skip this step and go to Step 5 or Step 6, respectively.

   1. Choose **Upload document**.

   1. You are prompted to upload a PDF, JPG, or PNG file from your local computer. Select the document from your local files, and then the prediction results are generated.

1. If your Ready-to-use model is expense analysis, do the following:

   1. Choose **Upload invoice or receipt**.

   1. You are prompted to upload a PDF, JPG, PNG, or TIFF file from your local computer. Select the document from your local files, and then the prediction results are generated.

1. If your Ready-to-use model is document queries, do the following:

   1. Choose **Upload document**.

   1. You are prompted to upload a PDF file from your local computer. Select the document from your local files. Your PDF must be 1–100 pages long.
**Note**  
If you're in the Asia Pacific (Seoul), Asia Pacific (Singapore), Asia Pacific (Sydney), or Europe (Frankfurt) Regions, then the maximum PDF length for document queries is 20 pages.

   1. In the pane on the right side, enter queries to search for information in the document. A single query can contain 1–200 characters, and you can add up to 15 queries at a time.

   1. Choose **Submit queries**, and then the results are generated with answers to your queries. You are billed once for each submission of queries you make.
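Document queries map to Amazon Textract's AnalyzeDocument API with the QUERIES feature type. The sketch below shows roughly how that call is shaped; the helper function is illustrative, and the client is passed in as a parameter (in practice you would pass `boto3.client("textract")`).

```python
def query_document(textract_client, document_bytes, questions):
    """Ask natural-language questions of a document through Textract's
    AnalyzeDocument API with the QUERIES feature type."""
    return textract_client.analyze_document(
        Document={"Bytes": document_bytes},
        FeatureTypes=["QUERIES"],
        QueriesConfig={"Queries": [{"Text": q} for q in questions]},
    )
```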

In the **Prediction results** pane on the right, you receive an analysis of your document.

The following information describes the results for each type of solution:
+ For expense analysis, the results are categorized into **Summary fields**, which include fields such as the total on a receipt, and **Line item fields**, which include fields such as individual items on a receipt. The identified fields are highlighted on the document image in the output.
+ For identity document analysis, the output shows you the fields that the Ready-to-use model identified, such as first and last name, address, or date of birth. The identified fields are highlighted on the document image in the output.
+ For document analysis, the results are categorized into **Raw text**, **Forms**, **Tables**, and **Signatures**. **Raw text** includes all of the extracted text, while **Forms**, **Tables**, and **Signatures** only include information on the form that falls into those categories. For example, **Tables** only includes information extracted from tables in the document. The identified fields are highlighted on the document image in the output.
+ For document queries, Canvas returns answers to each of your queries. You can open the collapsible query dropdown to view a result, along with a confidence score for the prediction. If Canvas finds multiple answers in the document, then you might have more than one result for each query.
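For expense analysis in particular, the results follow the shape of Amazon Textract's AnalyzeExpense response, where each summary field pairs a detected field type with a detected value. A hedged sketch of flattening those pairs into a dictionary (the sample values are invented):

```python
def summary_fields(expense_response):
    """Flatten the summary fields of a Textract AnalyzeExpense response
    into a {field type: detected value} dict."""
    fields = {}
    for document in expense_response["ExpenseDocuments"]:
        for field in document["SummaryFields"]:
            fields[field["Type"]["Text"]] = field["ValueDetection"]["Text"]
    return fields

# A response fragment with invented values:
example_response = {
    "ExpenseDocuments": [
        {
            "SummaryFields": [
                {"Type": {"Text": "TOTAL"},
                 "ValueDetection": {"Text": "$12.50"}},
                {"Type": {"Text": "INVOICE_RECEIPT_DATE"},
                 "ValueDetection": {"Text": "2023-04-01"}},
            ]
        }
    ]
}
```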

The following screenshot shows the results for a single prediction using the document analysis solution.

![\[Screenshot of the results of a single prediction with the document analysis Ready-to-use model.\]](http://docs.aws.amazon.com/sagemaker/latest/dg/images/studio/canvas/canvas-ready-to-use/ai-solutions-document-analysis.png)


## Batch predictions
<a name="canvas-ready-to-use-predict-document-batch"></a>

To make batch predictions for Ready-to-use models that accept document data, do the following:

1. In the left navigation pane of the Canvas application, choose **Ready-to-use models**.

1. On the **Ready-to-use models** page, choose the Ready-to-use model for your use case. For document data, it should be one of the following: **Expense analysis**, **Identity document analysis**, or **Document analysis**.

1. On the **Run predictions** page for your chosen Ready-to-use model, choose **Batch prediction**.

1. Choose **Select dataset** if you’ve already imported your dataset. If not, choose **Import new dataset**, and then you are directed through the import data workflow.

1. From the list of available datasets, select your dataset and choose **Generate predictions**. If your use case is document analysis, continue to Step 6.

1. If your use case is document analysis, a dialog box titled **Select features to include in batch prediction** appears. You can select **Forms**, **Tables**, and **Signatures** to group the results by those features. Then, choose **Generate predictions**.

After the prediction job finishes running, on the **Run predictions** page, you see an output dataset listed under **Predictions**. This dataset contains your results, and if you select the **More options** icon (![\[Vertical ellipsis icon representing a menu or more options.\]](http://docs.aws.amazon.com/sagemaker/latest/dg/images/studio/canvas/more-options-icon.png)), you can choose **View prediction results** to preview the analysis of your document data.

The following information describes the results for each type of solution:
+ For expense analysis, the results are categorized into **Summary fields**, which include fields such as the total on a receipt, and **Line item fields**, which include fields such as individual items on a receipt. The identified fields are highlighted on the document image in the output.
+ For identity document analysis, the output shows you the fields that the Ready-to-use model identified, such as first and last name, address, or date of birth. The identified fields are highlighted on the document image in the output.
+ For document analysis, the results are categorized into **Raw text**, **Forms**, **Tables**, and **Signatures**. **Raw text** includes all of the extracted text, while **Forms**, **Tables**, and **Signatures** only include information on the form that falls into those categories. For example, **Tables** only includes information extracted from tables in the document. The identified fields are highlighted on the document image in the output.

After previewing your results, you can choose **Download prediction** and download the results as a ZIP file.