

# Creating and using adapters
<a name="creating-and-using-adapters"></a>

Adapters are modular components that can be added to the existing Rekognition deep learning model, extending its capabilities for the tasks it’s trained on. By training a deep learning model with adapters, you can achieve better accuracy for image analysis tasks related to your specific use case. 

To create and use an adapter, you must provide training and testing data to Rekognition. You can accomplish this in one of two different ways:
+ Bulk analysis and verification - You can create a training dataset by submitting images in bulk for Rekognition to analyze and assign labels to. You can then review the generated annotations for your images and verify or correct the predictions. For more information on how bulk analysis of images works, see [Bulk analysis](https://docs.aws.amazon.com/rekognition/latest/dg/bulk-analysis.html).
+ Manual annotation - With this approach you create your training data by uploading and annotating images. You create your test data by either uploading and annotating images or by auto-splitting. 

Choose one of the following topics to learn more:

**Topics**
+ [Bulk analysis and verification](adapters-bulk-analysis.md)
+ [Manual annotation](adapters-manual-annotation.md)

# Bulk analysis and verification
<a name="adapters-bulk-analysis"></a>

With this approach, you upload a large number of images you want to use as training data and then use Rekognition to get predictions for these images, which automatically assigns labels to them. You can use these predictions as a starting point for your adapter. You can verify the accuracy of the predictions, and then train the adapter based on the verified predictions. This can be done with the AWS console.



 The following video demonstrates how to use Rekognition's Bulk Analysis capability to obtain and verify predictions for a large number of images, and then train an adapter with those predictions. 

[![AWS Videos](http://img.youtube.com/vi/IGGMHPnPZLs/0.jpg)](http://www.youtube.com/watch?v=IGGMHPnPZLs)


## Upload images for bulk analysis
<a name="adapters-bulk-analysis-upload-images"></a>

To create a training dataset for your adapter, upload images in bulk for Rekognition to predict labels for. For best results, provide as many training images as possible, up to the limit of 10,000, and ensure that the images are representative of all aspects of your use case.

When using the AWS console, you can upload images directly from your computer or provide an Amazon Simple Storage Service (Amazon S3) bucket that stores your images. However, when using the Rekognition APIs with an SDK, you must provide a manifest file that references images stored in an Amazon S3 bucket. See [Bulk analysis](https://docs.aws.amazon.com/rekognition/latest/dg/bulk-analysis.html) for more information.
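The manifest is a JSON Lines file in which each line references one image in Amazon S3. As a rough sketch (the bucket name and object keys below are placeholders, and the full schema is described in the Bulk analysis documentation), a minimal manifest that uses a `source-ref` field per image could be generated like this:

```python
import json

def manifest_lines(bucket, keys):
    """Yield one JSON Lines entry per image, each referencing
    an object stored in an Amazon S3 bucket."""
    for key in keys:
        yield json.dumps({"source-ref": f"s3://{bucket}/{key}"})

# Write a manifest referencing two placeholder images.
lines = list(manifest_lines("amzn-s3-demo-bucket",
                            ["images/cat.jpg", "images/dog.jpg"]))
with open("train.manifest", "w") as f:
    f.write("\n".join(lines))
```

Each output line is an independent JSON object, which is what makes the JSON Lines format convenient for streaming large datasets.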

## Review predictions
<a name="adapters-bulk-analysis-review-predictions"></a>

Once you have uploaded your images to the Rekognition console, Rekognition generates labels for them. You can then verify each prediction as one of the following categories: true positive, false positive, true negative, or false negative. After you have verified the predictions, you can train an adapter on your feedback.
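The four verification categories map directly onto standard accuracy metrics. As an illustration (this calculation is not part of the Rekognition API), you can compute precision and recall from the verified counts to gauge how much room for improvement an adapter has:

```python
def precision_recall(tp, fp, fn):
    """Compute precision and recall from verified prediction counts.
    True negatives do not appear in either metric."""
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall

# Example: 80 true positives, 20 false positives, 10 false negatives.
p, r = precision_recall(tp=80, fp=20, fn=10)
print(f"precision={p:.2f} recall={r:.2f}")  # precision=0.80 recall=0.89
```

Low precision means the base model over-predicts labels for your images; low recall means it misses them. Either signal suggests an adapter is worth training.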

## Train the adapter
<a name="adapters-bulk-analysis-train-adapter"></a>

Once you have finished verifying the predictions returned by bulk analysis, you can initiate the training process for your adapter. 

## Get the AdapterId
<a name="adapters-bulk-analysis-get-adapter"></a>

Once the adapter has been trained, you can get the unique ID for your adapter to use with Rekognition’s image analysis APIs.
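When working through an SDK rather than the console, a trained adapter is identified by a project version ARN. A hedged sketch with boto3 (the project ARN and region below are placeholders; the AWS-calling lines are shown commented out because they require credentials):

```python
def completed_adapter_arns(response):
    """Pull the ARNs of successfully trained versions out of a
    DescribeProjectVersions-style response dictionary."""
    return [
        v["ProjectVersionArn"]
        for v in response.get("ProjectVersionDescriptions", [])
        if v.get("Status") == "TRAINING_COMPLETED"
    ]

# With credentials configured, you would retrieve the response like this:
# import boto3  # AWS SDK for Python
# client = boto3.client("rekognition", region_name="us-east-1")
# resp = client.describe_project_versions(
#     ProjectArn="arn:aws:rekognition:us-east-1:111122223333:project/my-adapter/1"
# )
# print(completed_adapter_arns(resp))
```

Filtering on the training status ensures that you only pick up adapters that are ready to use.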

## Call the API operation
<a name="adapters-bulk-analysis-call-operation"></a>

To apply your custom adapter, provide its ID when calling one of the image analysis APIs that supports adapters. This enhances the accuracy of predictions for your images.
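For example, `DetectModerationLabels` accepts an adapter through its `ProjectVersion` parameter. A minimal boto3 sketch (the bucket, key, and adapter ARN below are placeholders; the AWS-calling lines are commented out because they require credentials):

```python
def moderation_request(bucket, key, adapter_arn, min_confidence=50):
    """Build the request for DetectModerationLabels, attaching the
    adapter through the ProjectVersion parameter."""
    return {
        "Image": {"S3Object": {"Bucket": bucket, "Name": key}},
        "MinConfidence": min_confidence,
        "ProjectVersion": adapter_arn,
    }

# With credentials configured, you would call the operation like this:
# import boto3  # AWS SDK for Python
# client = boto3.client("rekognition", region_name="us-east-1")
# resp = client.detect_moderation_labels(**moderation_request(
#     "amzn-s3-demo-bucket",
#     "images/photo.jpg",
#     "arn:aws:rekognition:us-east-1:111122223333:project/my-adapter/version/v1/1",
# ))
# for label in resp["ModerationLabels"]:
#     print(label["Name"], label["Confidence"])
```

Omitting `ProjectVersion` from the same request would fall back to the base Rekognition model, which makes it easy to compare results with and without the adapter.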

# Manual annotation
<a name="adapters-manual-annotation"></a>

With this approach, you create your training data by uploading and annotating images manually. You create your test data by either uploading and annotating test images or by auto-splitting to have Rekognition automatically use a portion of your training data as test images.

## Uploading and annotating images
<a name="adapters-upload-sample-images"></a>

To train the adapter, you’ll need to upload a set of sample images representative of your use case. For best results, provide as many training images as possible, up to the limit of 10,000, and ensure that the images are representative of all aspects of your use case.

![\[Interface showing options to import training images, with options to import a manifest file, import from S3 bucket, or upload images from computer. Includes an S3 URI field and note about ensuring read/write permissions.\]](http://docs.aws.amazon.com/rekognition/latest/dg/images/adapters-11-traiing-dataset.png)


When using the AWS console, you can upload images directly from your computer, provide a manifest file, or provide an Amazon S3 bucket that stores your images. However, when using the Rekognition APIs with an SDK, you must provide a manifest file that references images stored in an Amazon S3 bucket.

You can use the [Rekognition console](https://console.aws.amazon.com/rekognition)'s annotation interface to annotate your images. Annotate your images by tagging them with labels; this establishes a "ground truth" for training. You must also designate training and testing sets, or use the auto-split feature, before you can train an adapter. When you finish designating your datasets and annotating your images, you can create an adapter based on the annotated images in your training set. You can then evaluate the performance of your adapter.

## Create a test set
<a name="adapters-training-testing"></a>

You will need to provide an annotated test set or use the auto-split feature. The training set is used to actually train the adapter. The adapter learns the patterns contained in these annotated images. The test set is used to evaluate the model's performance before finalizing the adapter. 
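Rekognition performs the auto-split for you, but conceptually it reserves a portion of the annotated training images for testing, along these lines (the 20% test fraction here is illustrative, not a documented guarantee):

```python
import random

def auto_split(entries, test_fraction=0.2, seed=0):
    """Shuffle annotated entries and reserve a fraction for testing,
    mimicking what an auto-split does conceptually."""
    shuffled = list(entries)
    random.Random(seed).shuffle(shuffled)
    n_test = int(len(shuffled) * test_fraction)
    return shuffled[n_test:], shuffled[:n_test]  # (train, test)

train, test = auto_split([f"img-{i}.jpg" for i in range(100)])
print(len(train), len(test))  # 80 20
```

Keeping the two sets disjoint is the point: evaluating on images the adapter never trained on is what makes the test metrics trustworthy.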

## Train the adapter
<a name="adapters-train-adapter"></a>

Once you have finished annotating the training data, or have provided a manifest file, you can initiate the training process for your adapter.
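When training through an SDK instead of the console, an adapter is trained by creating a project version that points at your annotated manifest. A rough boto3 sketch (the ARN, bucket, and manifest key below are placeholder assumptions, and you should check the API reference for the exact parameters your feature requires; the AWS-calling lines are commented out because they require credentials):

```python
def training_request(project_arn, version_name, manifest_key, bucket):
    """Build a CreateProjectVersion-style request that points at an
    annotated training manifest and auto-splits the test set."""
    def s3_object(b, key):
        return {"S3Object": {"Bucket": b, "Name": key}}
    return {
        "ProjectArn": project_arn,
        "VersionName": version_name,
        "OutputConfig": {"S3Bucket": bucket, "S3KeyPrefix": "training-output/"},
        "TrainingData": {"Assets": [{"GroundTruthManifest": s3_object(bucket, manifest_key)}]},
        "TestingData": {"AutoCreate": True},  # auto-split the test set
    }

# With credentials configured, you would start training like this:
# import boto3  # AWS SDK for Python
# client = boto3.client("rekognition", region_name="us-east-1")
# client.create_project_version(**training_request(
#     "arn:aws:rekognition:us-east-1:111122223333:project/my-adapter/1",
#     "v1", "train.manifest", "amzn-s3-demo-bucket",
# ))
```

Setting `AutoCreate` in the testing data is the SDK counterpart of the console's auto-split option; to use your own test set, you would instead supply a second manifest.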

## Get the Adapter ID
<a name="adapter-get-adapter"></a>

Once the adapter has been trained, you can get the unique ID for your adapter to use with Rekognition's image analysis APIs.

## Call the API operation
<a name="adapter-call-operation"></a>

To apply your custom adapter, provide its ID when calling one of the image analysis APIs that supports adapters. This enhances the accuracy of predictions for your images. 