

# Moderating content
<a name="moderation"></a>

You can use Amazon Rekognition to detect content that is inappropriate, unwanted, or offensive. You can use Rekognition moderation APIs in social media, broadcast media, advertising, and e-commerce situations to create a safer user experience, provide brand safety assurances to advertisers, and comply with local and global regulations.

Today, many companies rely entirely on human moderators to review third-party or user-generated content, while others simply react to user complaints to take down offensive or inappropriate images, ads, or videos. However, human moderators alone cannot scale to meet these needs at sufficient quality or speed, which leads to a poor user experience, high costs to achieve scale, or even a loss of brand reputation. By using Rekognition for image and video moderation, human moderators can review a much smaller set of content, typically 1-5% of the total volume, already flagged by machine learning. This enables them to focus on more valuable activities and still achieve comprehensive moderation coverage at a fraction of their existing cost. To set up human workforces and perform human review tasks, you can use Amazon Augmented AI, which is already integrated with Rekognition.

You can enhance the accuracy of the moderation deep learning model with the Custom Moderation feature. With Custom Moderation, you train a custom moderation adapter by uploading and annotating your own images. You can then provide the trained adapter to the [DetectModerationLabels](https://docs.aws.amazon.com/rekognition/latest/APIReference/API_DetectModerationLabels.html) operation to enhance its performance on your images. See [Enhancing accuracy with Custom Moderation](moderation-custom-moderation.md) for more information. 

**Labels supported by Rekognition content moderation operations**
+ To download a list of the moderation labels, click [here](samples/rekognition-moderation-labels.zip).

**Topics**
+ [Using the image and video moderation APIs](moderation-api.md)
+ [Testing Content Moderation version 7 and transforming the API response](moderation-response-transform.md)
+ [Detecting inappropriate images](procedure-moderate-images.md)
+ [Detecting inappropriate stored videos](procedure-moderate-videos.md)
+ [Enhancing accuracy with Custom Moderation](moderation-custom-moderation.md)
+ [Reviewing inappropriate content with Amazon Augmented AI](a2i-rekognition.md)

The following diagram shows the order for calling operations, depending on your goals for using the image or video components of Content Moderation: 

![\[Flow diagram depicting steps for image and video moderation.\]](http://docs.aws.amazon.com/rekognition/latest/dg/images/Moderation%20workflow.png)


# Using the image and video moderation APIs
<a name="moderation-api"></a>

In the Amazon Rekognition Image API, you can detect inappropriate, unwanted, or offensive content synchronously using [DetectModerationLabels](https://docs.aws.amazon.com/rekognition/latest/APIReference/API_DetectModerationLabels.html) and asynchronously using [StartMediaAnalysisJob](https://docs.aws.amazon.com/rekognition/latest/APIReference/API_StartMediaAnalysisJob.html) and [GetMediaAnalysisJob](https://docs.aws.amazon.com/rekognition/latest/APIReference/API_GetMediaAnalysisJob.html) operations. You can use the Amazon Rekognition Video API to detect such content asynchronously by using the [StartContentModeration](https://docs.aws.amazon.com/rekognition/latest/APIReference/API_StartContentModeration.html) and [GetContentModeration](https://docs.aws.amazon.com/rekognition/latest/APIReference/API_GetContentModeration.html) operations.

## Label Categories
<a name="moderation-api-categories"></a>

Amazon Rekognition uses a three-level hierarchical taxonomy to label categories of inappropriate, unwanted, or offensive content. Each label with Taxonomy Level 1 (L1) has a number of Taxonomy Level 2 labels (L2), and some Taxonomy Level 2 labels may have Taxonomy Level 3 labels (L3). This allows a hierarchical classification of the content.

For each detected moderation label, the API also returns the `TaxonomyLevel`, which contains the level (1, 2, or 3) that the label belongs to. For example, an image may be labeled in accordance with the following categorization: 

L1: Non-Explicit Nudity of Intimate parts and Kissing, L2: Non-Explicit Nudity, L3: Implied Nudity. 

**Note**  
 We recommend using L1 or L2 categories to moderate your content and using L3 categories only to remove specific concepts that you do not want to moderate (i.e. to detect content that you may not want to categorize as inappropriate, unwanted, or offensive content based on your moderation policy). 

 The following table shows the relationships between the category levels and the possible labels for each level. To download a list of the moderation labels, click [here](samples/rekognition-moderation-labels.zip).


| Top-Level Category (L1) | Second-Level Category (L2) | Third-Level Category (L3) | Definitions | 
| --- |--- |--- |--- |
| Explicit | Explicit Nudity | Exposed Male Genitalia | Human male genitalia, including the penis (whether erect or flaccid), the scrotum, and any discernible pubic hair. This term is applicable in contexts involving sexual activity or any visual content where male genitals are displayed either completely or partially. | 
| | | Exposed Female Genitalia | External parts of the female reproductive system, encompassing the vulva, vagina, and any observable pubic hair. This term is applicable in scenarios involving sexual activity or any visual content where these aspects of female anatomy are displayed either completely or partially. | 
| | | Exposed Buttocks or Anus | Human buttocks or anus, including instances where the buttocks are nude or when they are discernible through sheer clothing. The definition specifically applies to situations where the buttocks or anus are directly and completely visible, excluding scenarios where any form of underwear or clothing provides complete or partial coverage. | 
| | | Exposed Female Nipple | Human female nipples, including fully visible and partially visible areola (the area surrounding the nipple) and nipples. | 
| | Explicit Sexual Activity | N/A | Depiction of actual or simulated sexual acts, which encompasses human sexual intercourse, oral sex, as well as male and female genital stimulation by other body parts and objects. The term also includes ejaculation or vaginal fluids on body parts, and erotic practices or roleplaying involving bondage, discipline, dominance and submission, and sadomasochism. | 
| | Sex Toys | N/A | Objects or devices used for sexual stimulation or pleasure, e.g., dildo, vibrator, butt plug, beads, etc. | 
| Non-Explicit Nudity of Intimate parts and Kissing | Non-Explicit Nudity | Bare Back | Human posterior part where the majority of the skin is visible from the neck to the end of the spine. This term does not apply when the individual's back is partially or fully occluded. | 
| | | Exposed Male Nipple | Human male nipples, including partially visible nipples. | 
| | | Partially Exposed Buttocks | Partially exposed human buttocks. This term includes a partially visible region of the buttocks or butt cheeks due to short clothes, or a partially visible top portion of the anal cleft. The term does not apply to cases where the buttocks are fully nude. | 
| | | Partially Exposed Female Breast | Partially exposed human female breast where a portion of the breast is visible or uncovered while not revealing the entire breast. This term applies when the region of the inner breast fold is visible, or when the lower breast crease is visible with the nipple fully covered or occluded. | 
| | | Implied Nudity | An individual who is nude, either topless or bottomless, but with intimate parts such as buttocks, nipples, or genitalia covered, occluded, or not fully visible. | 
| | Obstructed Intimate Parts | Obstructed Female Nipple | Visual depiction of a situation in which a female's nipples are covered by opaque clothing or coverings, but their shapes are clearly visible. | 
| | | Obstructed Male Genitalia | Visual depiction of a situation in which a male's genitalia or penis is covered by opaque clothing or coverings, but its shape is clearly visible. This term applies when the obstructed genitalia in the image is in close-up. | 
| | Kissing on the Lips | N/A | Depiction of one person's lips making contact with another person's lips. | 
| Swimwear or Underwear | Female Swimwear or Underwear | N/A | Human clothing for female swimwear (e.g., one-piece swimsuits, bikinis, tankinis, etc.) and female underwear (e.g., bras, panties, briefs, lingerie, thongs, etc.) | 
| | Male Swimwear or Underwear | N/A | Human clothing for male swimwear (e.g., swim trunks, boardshorts, swim briefs, etc.) and male underwear (e.g., briefs, boxers, etc.) | 
| Violence | Weapons | N/A | Instruments or devices used to cause harm or damage to living beings, structures, or systems. This includes firearms (e.g., guns, rifles, machine guns, etc.), sharp weapons (e.g., swords, knives, etc.), and explosives and ammunition (e.g., missiles, bombs, bullets, etc.). | 
| | Graphic Violence | Weapon Violence | The use of weapons to cause harm, damage, injury, or death to oneself, other individuals, or property. | 
| | | Physical Violence | The act of causing harm to other individuals or property (e.g., hitting, fighting, pulling hair, etc.), or other acts of violence involving a crowd or multiple individuals. | 
| | | Self-Harm | The act of causing harm to oneself, often by cutting body parts such as arms or legs, where cuts are typically visible. | 
| | | Blood & Gore | Visual representation of violence on a person, a group of individuals, or animals, involving open wounds, bloodshed, and mutilated body parts. | 
| | | Explosions and Blasts | Depiction of a violent and destructive burst of intense flames with thick smoke, or dust and smoke erupting from the ground. | 
| Visually Disturbing | Death and Emaciation | Emaciated Bodies | Human bodies that are extremely thin and undernourished, with severe physical wasting and depletion of muscle and fat tissue. | 
| | | Corpses | Human corpses in the form of mutilated bodies, hanging corpses, or skeletons. | 
| | Crashes | Air Crash | Incidents of air vehicles, such as airplanes, helicopters, or other flying vehicles, resulting in damage, injury, or death. This term applies when parts of the air vehicles are visible. | 
| Drugs & Tobacco | Products | Pills | Small, solid, often round or oval-shaped tablets or capsules. This term applies to pills presented standalone, in a bottle, or in a transparent packet, and does not apply to a visual depiction of a person taking pills. | 
| | Drugs & Tobacco Paraphernalia & Use | Smoking | The act of inhaling, exhaling, and lighting up burning substances, including cigarettes, cigars, e-cigarettes, hookah, or joints. | 
| Alcohol | Alcohol Use | Drinking | The act of drinking alcoholic beverages from bottles or glasses of alcohol or liquor. | 
| | Alcoholic Beverages | N/A | Close-up of one or multiple bottles of alcohol or liquor, glasses or mugs with alcohol or liquor, and glasses or mugs with alcohol or liquor held by an individual. This term does not apply to an individual drinking from bottles or glasses of alcohol or liquor. | 
| Rude Gestures | Middle Finger | N/A | Visual depiction of a hand gesture with the middle finger extended upward while the other fingers are folded down. | 
| Gambling | N/A | N/A | The act of participating in games of chance for a chance to win a prize in casinos, e.g., playing cards, blackjack, roulette, slot machines at casinos, etc. | 
| Hate Symbols | Nazi Party | N/A | Visual depiction of symbols, flags, or gestures associated with the Nazi Party. | 
| | White Supremacy | N/A | Visual depiction of symbols or clothing associated with the Ku Klux Klan (KKK), and images with Confederate flags. | 
| | Extremist | N/A | Images containing extremist and terrorist group flags. | 

Not every label in the L2 category has a supported label in the L3 category. Additionally, the L3 labels under the "Products" and "Drugs & Tobacco Paraphernalia & Use" L2 labels aren't exhaustive. These L2 labels cover concepts beyond the mentioned L3 labels, and in such cases only the L2 label is returned in the API response. 

You determine the suitability of content for your application. For example, images of a suggestive nature might be acceptable, but images containing nudity might not. To filter images, use the [ModerationLabel](https://docs.aws.amazon.com/rekognition/latest/APIReference/API_ModerationLabel.html) labels array that's returned by `DetectModerationLabels` (images) and by `GetContentModeration` (videos).
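As a sketch of that filtering step, the following Python snippet scans the `ModerationLabels` array of a response for labels that your policy blocks. The sample response and the blocked-category set are hypothetical examples, not output from a real API call; the field names follow the `ModerationLabel` structure (`Name`, `ParentName`, `TaxonomyLevel`, `Confidence`).

```python
# Hypothetical DetectModerationLabels response; field names follow the
# ModerationLabel structure documented in the API reference.
sample_response = {
    "ModerationLabels": [
        {"Name": "Non-Explicit Nudity of Intimate parts and Kissing",
         "ParentName": "", "TaxonomyLevel": 1, "Confidence": 92.1},
        {"Name": "Non-Explicit Nudity",
         "ParentName": "Non-Explicit Nudity of Intimate parts and Kissing",
         "TaxonomyLevel": 2, "Confidence": 90.4},
    ],
    "ModerationModelVersion": "7.0",
}

def blocked_labels(response, blocked_categories):
    """Return the detected labels whose names are in the blocked set."""
    return [label for label in response["ModerationLabels"]
            if label["Name"] in blocked_categories]

matches = blocked_labels(sample_response, {"Non-Explicit Nudity", "Explicit Nudity"})
print([m["Name"] for m in matches])  # ['Non-Explicit Nudity']
```

The same loop works for video results from `GetContentModeration`; each element there additionally carries timestamp information.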

## Content type
<a name="moderation-api-content-type"></a>

The API can also identify animated or illustrated content, and the content type is returned as part of the response: 
+ Animated content includes video game and animation (e.g., cartoon, comics, manga, anime).
+ Illustrated content includes drawing, painting, and sketches.

## Confidence
<a name="moderation-api-confidence"></a>

You can set the confidence threshold that Amazon Rekognition uses to detect inappropriate content by specifying the `MinConfidence` input parameter. Labels aren't returned for inappropriate content that is detected with a lower confidence than `MinConfidence`.

Specifying a value for `MinConfidence` that is less than 50% is likely to return a high number of false-positive results (i.e. higher recall, lower precision). On the other hand, specifying a `MinConfidence` above 50% is likely to return a lower number of false-positive results (i.e. lower recall, higher precision). If you don't specify a value for `MinConfidence`, Amazon Rekognition returns labels for inappropriate content that is detected with at least 50% confidence. 
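As an illustration of this trade-off, the sketch below re-filters an already-returned label list at a stricter client-side threshold. The label values are hypothetical; calling the API once with a low `MinConfidence` and tightening the cutoff locally lets you compare thresholds without repeated API calls.

```python
# Hypothetical labels as returned with MinConfidence=50; re-filtering
# client-side at a higher cutoff trades recall for precision.
labels = [
    {"Name": "Alcoholic Beverages", "Confidence": 55.2},
    {"Name": "Smoking", "Confidence": 88.7},
]

def at_least(labels, min_confidence):
    """Keep only labels detected at or above the given confidence."""
    return [l for l in labels if l["Confidence"] >= min_confidence]

print([l["Name"] for l in at_least(labels, 80.0)])  # ['Smoking']
```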

The `ModerationLabel` array contains labels in the preceding categories, and an estimated confidence in the accuracy of the recognized content. A top-level label is returned along with any second-level labels that were identified. For example, Amazon Rekognition might return “Explicit Nudity” with a high confidence score as a top-level label. That might be enough for your filtering needs. However, if it's necessary, you can use the confidence score of a second-level label (such as "Graphic Male Nudity") to obtain more granular filtering. For an example, see [Detecting inappropriate images](procedure-moderate-images.md).
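As a minimal illustration of that parent/child relationship, this snippet groups hypothetical labels by their `ParentName` so that second-level labels can be inspected separately from the top-level label when more granular filtering is needed.

```python
# Hypothetical top-level and second-level labels; ParentName links a
# second-level label back to its top-level category.
labels = [
    {"Name": "Explicit", "ParentName": "", "Confidence": 99.0},
    {"Name": "Explicit Nudity", "ParentName": "Explicit", "Confidence": 97.5},
]

def children_of(labels, parent):
    """Return the names of labels whose parent is the given category."""
    return [l["Name"] for l in labels if l["ParentName"] == parent]

print(children_of(labels, "Explicit"))  # ['Explicit Nudity']
```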

## Versioning
<a name="moderation-api-versioning"></a>

Amazon Rekognition Image and Amazon Rekognition Video both return the version of the moderation detection model that is used to detect inappropriate content (`ModerationModelVersion`). 

## Sorting and Aggregating
<a name="moderation-api-sorting-aggregating"></a>

When retrieving results with GetContentModeration, you can sort and aggregate your results. 

**Sort order** — The array of labels returned is sorted by time. To sort by label, specify `NAME` in the `SortBy` input parameter for `GetContentModeration`. If the label appears multiple times in the video, there will be multiple instances of the `ModerationLabel` element. 

**Label information** — The ModerationLabels array element contains a `ModerationLabel` object, which in turn contains the label name and the confidence Amazon Rekognition has in the accuracy of the detected label. Timestamp is the time the `ModerationLabel` was detected, defined as the number of milliseconds elapsed since the start of the video. For results aggregated by video `SEGMENTS`, the `StartTimestampMillis`, `EndTimestampMillis`, and `DurationMillis` structures are returned, which define the start time, end time, and duration of a segment respectively. 

 **Aggregation** — Specifies how results are aggregated when returned. The default is to aggregate by `TIMESTAMPS`. You can also choose to aggregate by `SEGMENTS`, which aggregates results over a time window. Only labels detected during the segments are returned. 
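The following sketch shows what post-processing of `SEGMENTS`-aggregated results might look like, summing the flagged duration per label. The segment values are hypothetical; the field names follow the timestamp fields described above.

```python
# Hypothetical GetContentModeration results aggregated by SEGMENTS; each
# element carries a label plus start/end/duration in milliseconds.
segments = [
    {"ModerationLabel": {"Name": "Smoking", "Confidence": 93.0},
     "StartTimestampMillis": 1000, "EndTimestampMillis": 4000,
     "DurationMillis": 3000},
    {"ModerationLabel": {"Name": "Smoking", "Confidence": 88.0},
     "StartTimestampMillis": 9000, "EndTimestampMillis": 10000,
     "DurationMillis": 1000},
]

def total_duration_by_label(segments):
    """Sum the flagged milliseconds per moderation label."""
    totals = {}
    for seg in segments:
        name = seg["ModerationLabel"]["Name"]
        totals[name] = totals.get(name, 0) + seg["DurationMillis"]
    return totals

print(total_duration_by_label(segments))  # {'Smoking': 4000}
```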

## Custom Moderation adapter statuses
<a name="moderation-api-statuses"></a>

Custom Moderation adapters can be in one of the following statuses: TRAINING_IN_PROGRESS, TRAINING_COMPLETED, TRAINING_FAILED, DELETING, DEPRECATED, or EXPIRED. For a full explanation of these adapter statuses, see [Managing adapters](https://docs.aws.amazon.com/rekognition/latest/dg/managing-adapters.html).

**Note**  
Amazon Rekognition isn't an authority on, and doesn't in any way claim to be an exhaustive filter of, inappropriate or offensive content. Additionally, the image and video moderation APIs don't detect whether an image includes illegal content, such as CSAM.

# Testing Content Moderation version 7 and transforming the API response
<a name="moderation-response-transform"></a>

Rekognition updated the machine learning model for the image and video components of the Content Moderation label detection feature from version 6.1 to version 7. This update enhanced overall accuracy, and introduced several new categories while modifying others.

If you currently use version 6.1 for video moderation, we recommend that you take the following actions to transition seamlessly to version 7: 

1.  Download and use an AWS private SDK (see the [AWS SDK and Usage Guide for Content Moderation version 7](moderation-labels-update-sdk.md)) to call the StartContentModeration API. 

1. Review the updated list of labels and confidence scores returned in the API response or console. Adjust your application post-processing logic accordingly if necessary. 

1.  Your account will remain on version 6.1 until May 13, 2024. If you wish to use version 6.1 beyond May 13, 2024, contact the [AWS Support team](https://aws.amazon.com/support) by April 30, 2024 to request an extension. We can extend your account to remain on version 6.1 until June 10, 2024. If we do not hear from you by April 30, 2024, your account will be automatically migrated to version 7.0 starting May 13, 2024. 

# AWS SDK and Usage Guide for Content Moderation version 7
<a name="moderation-labels-update-sdk"></a>

 Download the SDK that corresponds with your chosen development language, and consult the appropriate user guide. 




| Link to SDK | Installation / User Guide | 
| --- |--- |
| [Java-1.X](https://d1m67pwji3rslw.cloudfront.net/sdk/aws_rekognition_java_v1.jar) | [Guide - Java 1.pdf](https://d1m67pwji3rslw.cloudfront.net/guide/Guide_Java1.x.pdf) | 
| [Java-2.X](https://d1m67pwji3rslw.cloudfront.net/sdk/aws_rekognition_java_v2.jar) | [Guide - Java 2.pdf](https://d1m67pwji3rslw.cloudfront.net/guide/Guide_Java2.x.pdf) | 
| [JavaScript v2](https://d1m67pwji3rslw.cloudfront.net/sdk/aws_rekognition_js_v2.tgz) | [Guide - JavaScript v2.pdf](https://d1m67pwji3rslw.cloudfront.net/guide/Guide_JavaScriptV2.pdf) | 
|  [JavaScript v3](https://d1m67pwji3rslw.cloudfront.net/sdk/aws_rekognition_js_v3.zip) |  [Guide - JavaScript v3.pdf](https://d1m67pwji3rslw.cloudfront.net/guide/Guide_JavaScriptV3.pdf) | 
| [Python](https://d1m67pwji3rslw.cloudfront.net/sdk/aws_rekognition_python.zip) | [Guide - Python & AWS CLI.pdf](https://d1m67pwji3rslw.cloudfront.net/guide/Guide_Python.pdf) | 
| [Ruby](https://d1m67pwji3rslw.cloudfront.net/sdk/aws_rekognition_ruby.zip) | [Guide - RubyV3.pdf](https://d1m67pwji3rslw.cloudfront.net/guide/Guide_RubyV3.pdf) | 
| [Go v1](https://d1m67pwji3rslw.cloudfront.net/sdk/aws_rekognition_go_v1.zip) | [Guide - GO V1.pdf](https://d1m67pwji3rslw.cloudfront.net/guide/Guide_GoV1.pdf) | 
| [Go v2](https://d1m67pwji3rslw.cloudfront.net/sdk/aws_rekognition_go_v2.zip) | [Guide - GO V2.pdf](https://d1m67pwji3rslw.cloudfront.net/guide/Guide_GoV2.pdf) | 
| [DotNet](https://d1m67pwji3rslw.cloudfront.net/sdk/aws_rekognition_dotnet.zip) | [Guide - .NET.pdf](https://d1m67pwji3rslw.cloudfront.net/guide/Guide_DotNET.pdf) | 
| [php](https://d1m67pwji3rslw.cloudfront.net/sdk/aws_rekognition_php.zip) | [Guide - PHP.pdf](https://d1m67pwji3rslw.cloudfront.net/guide/Guide_PHP.pdf) | 

## Label mappings for Versions 6.1 to 7
<a name="moderation-labels-transform-schema"></a>

Content moderation version 7 added new label categories and modified previously existing label names. Reference the taxonomy table found at [Label Categories](moderation-api.md#moderation-api-categories) when deciding how to map 6.1 labels to 7 labels.

Some example label mappings are found in the following sections. We recommend that you review these mappings and the label definitions before making any necessary updates to your application's post-processing logic.

**L1 Mapping Schema**

If you use post-processing logic that filters only on the top-level category (L1), such as `Explicit Nudity`, `Suggestive`, or `Violence`, refer to the following table to update your code.


| V6.1 L1 | V7 L1 | 
| --- |--- |
| Explicit Nudity | Explicit | 
| Suggestive | Non-Explicit Nudity of Intimate parts and Kissing | 
| | Swimwear or Underwear | 
| Violence | Violence | 
| Visually Disturbing | Visually Disturbing | 
| Rude Gestures | Rude Gestures | 
| Drugs | Drugs & Tobacco | 
| Tobacco | Drugs & Tobacco | 
| Alcohol | Alcohol | 
| Gambling | Gambling | 
| Hate Symbols | Hate Symbols | 
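If it helps to encode this mapping in code, the L1 mapping above can be expressed as a simple lookup table, for example in Python. Because v6.1 `Suggestive` fans out to two v7 L1 categories, each entry maps to a list.

```python
# Lookup table built from the L1 mapping above, for post-processing
# code that still keys on version 6.1 category names.
V61_TO_V7_L1 = {
    "Explicit Nudity": ["Explicit"],
    "Suggestive": ["Non-Explicit Nudity of Intimate parts and Kissing",
                   "Swimwear or Underwear"],
    "Violence": ["Violence"],
    "Visually Disturbing": ["Visually Disturbing"],
    "Rude Gestures": ["Rude Gestures"],
    "Drugs": ["Drugs & Tobacco"],
    "Tobacco": ["Drugs & Tobacco"],
    "Alcohol": ["Alcohol"],
    "Gambling": ["Gambling"],
    "Hate Symbols": ["Hate Symbols"],
}

print(V61_TO_V7_L1["Drugs"])  # ['Drugs & Tobacco']
```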

**L2 Mapping Schema**

If you use post-processing logic that filters on both L1 and L2 categories (such as `Explicit Nudity / Nudity`, `Suggestive / Female Swimwear Or Underwear`, or `Violence / Weapon Violence`), refer to the following table to update your code.


| V6.1 L1 | V6.1 L2 | V7 L1 | V7 L2 | V7 L3 | V7 ContentTypes | 
| --- |--- |--- |--- |--- |--- |
| Explicit Nudity | Nudity | Explicit | Explicit Nudity | Exposed Female Nipple, Exposed Buttocks or Anus | | 
| | Graphic Male Nudity | Explicit | Explicit Nudity | Exposed Male Genitalia | | 
| | Graphic Female Nudity | Explicit | Explicit Nudity | Exposed Female Genitalia | | 
| | Sexual Activity | Explicit | Explicit Sexual Activity | | | 
| | Illustrated Explicit Nudity | Explicit | Explicit Nudity | | Map to "Animated" and "Illustrated" | 
| | Illustrated Explicit Nudity | Explicit | Explicit Sexual Activity | | Map to "Animated" and "Illustrated" | 
| | Adult Toys | Explicit | Sex Toys | | | 
| Suggestive | Female Swimwear Or Underwear | Swimwear or Underwear | Female Swimwear or Underwear | | | 
| | Male Swimwear Or Underwear | Swimwear or Underwear | Male Swimwear or Underwear | | | 
| | Partial Nudity | Non-Explicit Nudity of Intimate parts and Kissing | Non-Explicit Nudity | Implied Nudity | | 
| | Barechested Male | Non-Explicit Nudity of Intimate parts and Kissing | Non-Explicit Nudity | Exposed Male Nipple | | 
| | Revealing Clothes | Non-Explicit Nudity of Intimate parts and Kissing | Non-Explicit Nudity | | | 
| | | Non-Explicit Nudity of Intimate parts and Kissing | Obstructed Intimate Parts | | | 
| | Sexual Situations | Non-Explicit Nudity of Intimate parts and Kissing | Kissing on the Lips | | | 
| Violence | Graphic Violence Or Gore | Violence | Graphic Violence | Blood & Gore | | 
| | Physical Violence | Violence | Graphic Violence | Physical Violence | | 
| | Weapon Violence | Violence | Graphic Violence | Weapon Violence | | 
| | Weapons | Violence | Weapons | | | 
| | Self Injury | Violence | Graphic Violence | Self-Harm | | 
| Visually Disturbing | Emaciated Bodies | Visually Disturbing | Death and Emaciation | Emaciated Bodies | | 
| | Corpses | Visually Disturbing | Death and Emaciation | Corpses | | 
| | Hanging | Visually Disturbing | Death and Emaciation | Corpses | | 
| | Air Crash | Visually Disturbing | Crashes | Air Crash | | 
| | Explosions And Blasts | Violence | Graphic Violence | Explosions and Blasts | | 
| Rude Gestures | Middle Finger | Rude Gestures | Middle Finger | | | 
| Drugs | Drug Products | Drugs & Tobacco | Products | | | 
| | Drug Use | Drugs & Tobacco | Drugs & Tobacco Paraphernalia & Use | | | 
| | Pills | Drugs & Tobacco | Products | Pills | | 
| | Drug Paraphernalia | Drugs & Tobacco | Drugs & Tobacco Paraphernalia & Use | | | 
| Tobacco | Tobacco Products | Drugs & Tobacco | Products | | | 
| | Smoking | Drugs & Tobacco | Drugs & Tobacco Paraphernalia & Use | Smoking | | 
| Alcohol | Drinking | Alcohol | Alcohol Use | Drinking | | 
| | Alcoholic Beverages | Alcohol | Alcoholic Beverages | | | 
| Gambling | Gambling | Gambling | | | | 
| Hate Symbols | Nazi Party | Hate Symbols | Nazi Party | | | 
| | White Supremacy | Hate Symbols | White Supremacy | | | 
| | Extremist | Hate Symbols | Extremist | | | 

# Detecting inappropriate images
<a name="procedure-moderate-images"></a>

You can use the [DetectModerationLabels](https://docs.aws.amazon.com/rekognition/latest/APIReference/API_DetectModerationLabels.html) operation to determine if an image contains inappropriate or offensive content. For a list of moderation labels in Amazon Rekognition, see [Using the image and video moderation APIs](https://docs.aws.amazon.com/rekognition/latest/dg/moderation.html#moderation-api).



## Detecting inappropriate content in an image
<a name="moderate-images-sdk"></a>

The image must be in either a .jpg or a .png format. You can provide the input image as an image byte array (base64-encoded image bytes), or specify an Amazon S3 object. In these procedures, you upload an image (.jpg or .png) to your S3 bucket.

To run these procedures, you need to have the AWS CLI or the appropriate AWS SDK installed. For more information, see [Getting started with Amazon Rekognition](getting-started.md). The AWS account you use must have access permissions to the Amazon Rekognition API. For more information, see [Actions Defined by Amazon Rekognition](https://docs.aws.amazon.com/IAM/latest/UserGuide/list_amazonrekognition.html#amazonrekognition-actions-as-permissions). 

**To detect moderation labels in an image (SDK)**

1. If you haven't already:

   1. Create or update a user with `AmazonRekognitionFullAccess` and `AmazonS3ReadOnlyAccess` permissions. For more information, see [Step 1: Set up an AWS account and create a User](setting-up.md#setting-up-iam).

   1. Install and configure the AWS CLI and the AWS SDKs. For more information, see [Step 2: Set up the AWS CLI and AWS SDKs](setup-awscli-sdk.md).

1. Upload an image to your S3 bucket. 

   For instructions, see [Uploading Objects into Amazon S3](https://docs.aws.amazon.com/AmazonS3/latest/userguide/UploadingObjectsintoAmazonS3.html) in the *Amazon Simple Storage Service User Guide*.

1. Use the following examples to call the `DetectModerationLabels` operation.

------
#### [ Java ]

   This example outputs detected inappropriate content label names, confidence levels, and the parent label for detected moderation labels.

   Replace the values of `amzn-s3-demo-bucket` and `photo` with the S3 bucket name and the image file name that you used in step 2.

   ```
   //Copyright 2018 Amazon.com, Inc. or its affiliates. All Rights Reserved.
   //SPDX-License-Identifier: MIT-0 (For details, see https://github.com/awsdocs/amazon-rekognition-developer-guide/blob/master/LICENSE-SAMPLECODE.)
   
   package aws.example.rekognition.image;
   import com.amazonaws.services.rekognition.AmazonRekognition;
   import com.amazonaws.services.rekognition.AmazonRekognitionClientBuilder;
   import com.amazonaws.services.rekognition.model.AmazonRekognitionException;
   import com.amazonaws.services.rekognition.model.DetectModerationLabelsRequest;
   import com.amazonaws.services.rekognition.model.DetectModerationLabelsResult;
   import com.amazonaws.services.rekognition.model.Image;
   import com.amazonaws.services.rekognition.model.ModerationLabel;
   import com.amazonaws.services.rekognition.model.S3Object;
   
   import java.util.List;
   
   public class DetectModerationLabels
   {
      public static void main(String[] args) throws Exception
      {
         String photo = "input.jpg";
         String bucket = "bucket";
         
         AmazonRekognition rekognitionClient = AmazonRekognitionClientBuilder.defaultClient();
         
         DetectModerationLabelsRequest request = new DetectModerationLabelsRequest()
           .withImage(new Image().withS3Object(new S3Object().withName(photo).withBucket(bucket)))
           .withMinConfidence(60F);
         try
         {
              DetectModerationLabelsResult result = rekognitionClient.detectModerationLabels(request);
              List<ModerationLabel> labels = result.getModerationLabels();
              System.out.println("Detected labels for " + photo);
              for (ModerationLabel label : labels)
              {
                 System.out.println("Label: " + label.getName()
                  + "\n Confidence: " + label.getConfidence().toString() + "%"
                  + "\n Parent:" + label.getParentName());
             }
          }
          catch (AmazonRekognitionException e)
          {
            e.printStackTrace();
          }
       }
   }
   ```

------
#### [ Java V2 ]

   This code is taken from the AWS Documentation SDK examples GitHub repository. See the full example [here](https://github.com/awsdocs/aws-doc-sdk-examples/blob/master/javav2/example_code/rekognition/src/main/java/com/example/rekognition/DetectModerationLabels.java).

   ```
   //snippet-start:[rekognition.java2.detect_mod_labels.import]
   import software.amazon.awssdk.auth.credentials.ProfileCredentialsProvider;
   import software.amazon.awssdk.core.SdkBytes;
   import software.amazon.awssdk.regions.Region;
   import software.amazon.awssdk.services.rekognition.RekognitionClient;
   import software.amazon.awssdk.services.rekognition.model.RekognitionException;
   import software.amazon.awssdk.services.rekognition.model.Image;
   import software.amazon.awssdk.services.rekognition.model.DetectModerationLabelsRequest;
   import software.amazon.awssdk.services.rekognition.model.DetectModerationLabelsResponse;
   import software.amazon.awssdk.services.rekognition.model.ModerationLabel;
   import java.io.FileInputStream;
   import java.io.FileNotFoundException;
   import java.io.InputStream;
   import java.util.List;
   //snippet-end:[rekognition.java2.detect_mod_labels.import]
   
   /**
   * Before running this Java V2 code example, set up your development environment, including your credentials.
   *
   * For more information, see the following documentation topic:
   *
   * https://docs.aws.amazon.com/sdk-for-java/latest/developer-guide/get-started.html
   */
   public class ModerateLabels {
   
    public static void main(String[] args) {
   
        final String usage = "\n" +
            "Usage: " +
            "   <sourceImage>\n\n" +
            "Where:\n" +
            "   sourceImage - The path to the image (for example, C:\\AWS\\pic1.png). \n\n";
   
        if (args.length < 1) {
            System.out.println(usage);
            System.exit(1);
        }
   
        String sourceImage = args[0];
        Region region = Region.US_WEST_2;
        RekognitionClient rekClient = RekognitionClient.builder()
            .region(region)
            .credentialsProvider(ProfileCredentialsProvider.create("profile-name"))
            .build();
   
        detectModLabels(rekClient, sourceImage);
        rekClient.close();
    }
   
    // snippet-start:[rekognition.java2.detect_mod_labels.main]
    public static void detectModLabels(RekognitionClient rekClient, String sourceImage) {
   
        try {
            InputStream sourceStream = new FileInputStream(sourceImage);
            SdkBytes sourceBytes = SdkBytes.fromInputStream(sourceStream);
            Image souImage = Image.builder()
                .bytes(sourceBytes)
                .build();
   
            DetectModerationLabelsRequest moderationLabelsRequest = DetectModerationLabelsRequest.builder()
                .image(souImage)
                .minConfidence(60F)
                .build();
   
            DetectModerationLabelsResponse moderationLabelsResponse = rekClient.detectModerationLabels(moderationLabelsRequest);
            List<ModerationLabel> labels = moderationLabelsResponse.moderationLabels();
            System.out.println("Detected labels for image");
   
            for (ModerationLabel label : labels) {
                System.out.println("Label: " + label.name()
                    + "\n Confidence: " + label.confidence().toString() + "%"
                    + "\n Parent:" + label.parentName());
            }
   
        } catch (RekognitionException | FileNotFoundException e) {
            e.printStackTrace();
            System.exit(1);
        }
    }
    // snippet-end:[rekognition.java2.detect_mod_labels.main]
   ```

------
#### [ AWS CLI ]

   This AWS CLI command displays the JSON output for the `detect-moderation-labels` CLI operation. 

   Replace `amzn-s3-demo-bucket` and `input.jpg` with the S3 bucket name and the image file name that you used in step 2. Replace `profile-name` with the name of your developer profile. To use an adapter, provide the ARN of the project version to the `--project-version` parameter.

   ```
   aws rekognition detect-moderation-labels --image "{S3Object:{Bucket:<amzn-s3-demo-bucket>,Name:<image-name>}}" \
   --profile profile-name \
   --project-version "ARN"
   ```

   If you are accessing the CLI on a Windows device, use double quotes instead of single quotes and escape the inner double quotes with a backslash (that is, \") to address any parser errors you may encounter. For an example, see the following: 

   ```
   aws rekognition detect-moderation-labels --image "{\"S3Object\":{\"Bucket\":\"amzn-s3-demo-bucket\",\"Name\":\"image-name\"}}" \
   --profile profile-name
   ```

------
#### [ Python ]

   This example outputs the name, confidence level, and parent label of each detected inappropriate or offensive content label.

   In the function `main`, replace the values of `amzn-s3-demo-bucket` and `photo` with the S3 bucket name and the image file name that you used in step 2. Replace `profile-name` in the line that creates the Rekognition session with the name of your developer profile.

   ```
   #Copyright 2018 Amazon.com, Inc. or its affiliates. All Rights Reserved.
    #SPDX-License-Identifier: MIT-0 (For details, see https://github.com/awsdocs/amazon-rekognition-developer-guide/blob/master/LICENSE-SAMPLECODE.)
   
   import boto3
   
   def moderate_image(photo, bucket):
       
       session = boto3.Session(profile_name='profile-name')
       client = session.client('rekognition')
   
       response = client.detect_moderation_labels(Image={'S3Object':{'Bucket':bucket,'Name':photo}})
   
       print('Detected labels for ' + photo)
       for label in response['ModerationLabels']:
           print (label['Name'] + ' : ' + str(label['Confidence']))
           print (label['ParentName'])
       return len(response['ModerationLabels'])
   
   def main():
   
       photo='image-name'
       bucket='amzn-s3-demo-bucket'
       label_count=moderate_image(photo, bucket)
       print("Labels detected: " + str(label_count))
   
   if __name__ == "__main__":
       main()
   ```

------
#### [ .NET ]

   This example outputs the name, confidence level, and parent label of each detected moderation label.

   Replace the values of `amzn-s3-demo-bucket` and `photo` with the S3 bucket name and the image file name that you used in step 2.

   ```
   //Copyright 2018 Amazon.com, Inc. or its affiliates. All Rights Reserved.
    //SPDX-License-Identifier: MIT-0 (For details, see https://github.com/awsdocs/amazon-rekognition-developer-guide/blob/master/LICENSE-SAMPLECODE.)
   
   using System;
   using Amazon.Rekognition;
   using Amazon.Rekognition.Model;
   
   public class DetectModerationLabels
   {
       public static void Example()
       {
           String photo = "input.jpg";
           String bucket = "amzn-s3-demo-bucket";
   
           AmazonRekognitionClient rekognitionClient = new AmazonRekognitionClient();
   
           DetectModerationLabelsRequest detectModerationLabelsRequest = new DetectModerationLabelsRequest()
           {
               Image = new Image()
               {
                   S3Object = new S3Object()
                   {
                       Name = photo,
                       Bucket = bucket
                   },
               },
               MinConfidence = 60F
           };
   
           try
           {
               DetectModerationLabelsResponse detectModerationLabelsResponse = rekognitionClient.DetectModerationLabels(detectModerationLabelsRequest);
               Console.WriteLine("Detected labels for " + photo);
               foreach (ModerationLabel label in detectModerationLabelsResponse.ModerationLabels)
                   Console.WriteLine("Label: {0}\n Confidence: {1}\n Parent: {2}", 
                       label.Name, label.Confidence, label.ParentName);
           }
           catch (Exception e)
           {
               Console.WriteLine(e.Message);
           }
       }
   }
   ```

------

## DetectModerationLabels operation request
<a name="detectmoderation-labels-operation-request"></a>

The input to `DetectModerationLabels` is an image. In this example JSON input, the source image is loaded from an Amazon S3 bucket. `MinConfidence` is the minimum confidence that Amazon Rekognition Image must have in the accuracy of the detected label for it to be returned in the response.

```
{
    "Image": {
        "S3Object": {
            "Bucket": "amzn-s3-demo-bucket",
            "Name": "input.jpg"
        }
    },
    "MinConfidence": 60
}
```

## DetectModerationLabels operation response
<a name="detectmoderationlabels-operation-response"></a>

 `DetectModerationLabels` can retrieve input images from an S3 bucket, or you can provide them as image bytes. The following example is the response from a call to `DetectModerationLabels`.
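
As a minimal sketch of the bytes-based variant, the following Python code sends a local image file to `DetectModerationLabels` as raw bytes. The file path and the `build_bytes_request` helper are illustrative, not part of the API:

```python
def build_bytes_request(image_bytes, min_confidence=60):
    """Build DetectModerationLabels parameters for a bytes-based call (illustrative helper)."""
    return {"Image": {"Bytes": image_bytes}, "MinConfidence": min_confidence}

def moderate_local_image(path, min_confidence=60):
    """Send a local image file to DetectModerationLabels as raw bytes.

    Requires AWS credentials; 'path' is a placeholder for your image file.
    """
    import boto3  # imported here so the helper above stays dependency-free
    client = boto3.client("rekognition")
    with open(path, "rb") as f:
        params = build_bytes_request(f.read(), min_confidence)
    return client.detect_moderation_labels(**params)
```

Supplying bytes avoids the S3 round trip for images you already hold locally; for S3-hosted images, the `S3Object` form shown in the earlier examples is usually simpler.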

In the following example JSON response, note the following:
+ **Inappropriate Image Detection information** – The example shows a list of labels for inappropriate or offensive content found in the image. The list includes the top-level label and each lower-level label detected in the image.

  **Label** – Each label has a name, an estimation of the confidence that Amazon Rekognition has that the label is accurate, and the name of its parent label. The parent name for a top-level label is `""`.

  **Label confidence** – Each label has a confidence value between 0 and 100 that indicates the percentage confidence that Amazon Rekognition has that the label is correct. In the API operation request, you specify the minimum confidence level required for a label to be returned in the response.

```
{
    "ModerationLabels": [
        {
            "Confidence": 99.44782257080078,
            "Name": "Smoking",
            "ParentName": "Drugs & Tobacco Paraphernalia & Use",
            "TaxonomyLevel": 3
        },
        {
            "Confidence": 99.44782257080078,
            "Name": "Drugs & Tobacco Paraphernalia & Use",
            "ParentName": "Drugs & Tobacco",
            "TaxonomyLevel": 2
        },
        {
            "Confidence": 99.44782257080078,
            "Name": "Drugs & Tobacco",
            "ParentName": "",
            "TaxonomyLevel": 1
        }
    ],
    "ModerationModelVersion": "7.0",
    "ContentTypes": [
        {
            "Confidence": 99.9999008178711,
            "Name": "Illustrated"
        }
    ]
}
```
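
Because each label carries a `ParentName` link (empty for a top-level label), the flat `ModerationLabels` list can be regrouped under each label's level-1 ancestor. A minimal sketch, operating on the response structure shown above:

```python
def group_by_top_level(moderation_labels):
    """Group a flat ModerationLabels list under each label's level-1 ancestor.

    An empty ParentName marks a top-level (TaxonomyLevel 1) label.
    """
    parents = {lbl["Name"]: lbl["ParentName"] for lbl in moderation_labels}

    def top_level(name):
        while parents.get(name):  # walk up until ParentName is empty
            name = parents[name]
        return name

    grouped = {}
    for lbl in moderation_labels:
        grouped.setdefault(top_level(lbl["Name"]), []).append(lbl["Name"])
    return grouped
```

Applied to the example response above, all three labels group under `Drugs & Tobacco`.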

# Detecting inappropriate stored videos
<a name="procedure-moderate-videos"></a>

Amazon Rekognition Video inappropriate or offensive content detection in stored videos is an asynchronous operation. To start detecting inappropriate or offensive content, call [StartContentModeration](https://docs.aws.amazon.com/rekognition/latest/APIReference/API_StartContentModeration.html). Amazon Rekognition Video publishes the completion status of the video analysis to an Amazon Simple Notification Service topic. If the video analysis is successful, call [GetContentModeration](https://docs.aws.amazon.com/rekognition/latest/APIReference/API_GetContentModeration.html) to get the analysis results. For more information about starting video analysis and getting the results, see [Calling Amazon Rekognition Video operations](api-video.md). For a list of moderation labels in Amazon Rekognition, see [Using the image and video moderation APIs](https://docs.aws.amazon.com/rekognition/latest/dg/moderation.html#moderation-api).
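
For quick experiments, the start-then-get flow can also be driven by simple polling instead of an Amazon SNS notification. The following is a hedged Python sketch; the bucket and video names are placeholders, and the SNS/SQS completion flow remains the recommended approach for production workloads:

```python
import time

def s3_video(bucket, name):
    """Build the Video parameter for StartContentModeration (illustrative helper)."""
    return {"S3Object": {"Bucket": bucket, "Name": name}}

def moderate_stored_video(bucket, video, poll_seconds=5):
    """Start content moderation for an S3-stored video, then poll until the job finishes.

    Requires AWS credentials. Polling is fine for experiments; use the
    SNS/SQS completion flow for production workloads.
    """
    import boto3
    client = boto3.client("rekognition")
    job_id = client.start_content_moderation(Video=s3_video(bucket, video))["JobId"]

    while True:
        result = client.get_content_moderation(JobId=job_id)
        if result["JobStatus"] in ("SUCCEEDED", "FAILED"):
            return result
        time.sleep(poll_seconds)
```

Note that this sketch returns only the first page of results; a complete implementation would also follow `NextToken`, as the SDK examples in this section do.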

 This procedure expands on the code in [Analyzing a video stored in an Amazon S3 bucket with Java or Python (SDK)](video-analyzing-with-sqs.md), which uses an Amazon Simple Queue Service queue to get the completion status of a video analysis request.

**To detect inappropriate or offensive content in a video stored in an Amazon S3 bucket (SDK)**

1. Perform [Analyzing a video stored in an Amazon S3 bucket with Java or Python (SDK)](video-analyzing-with-sqs.md).

1. Add the following code to the class `VideoDetect` that you created in step 1.

------
#### [ Java ]

   ```
       //Copyright 2018 Amazon.com, Inc. or its affiliates. All Rights Reserved.
        //SPDX-License-Identifier: MIT-0 (For details, see https://github.com/awsdocs/amazon-rekognition-developer-guide/blob/master/LICENSE-SAMPLECODE.)
   
        //Content moderation ==================================================================
        private static void StartUnsafeContentDetection(String bucket, String video) throws Exception {

            NotificationChannel channel = new NotificationChannel()
                    .withSNSTopicArn(snsTopicArn)
                    .withRoleArn(roleArn);

            StartContentModerationRequest req = new StartContentModerationRequest()
                    .withVideo(new Video()
                            .withS3Object(new S3Object()
                                    .withBucket(bucket)
                                    .withName(video)))
                    .withNotificationChannel(channel);

            StartContentModerationResult startModerationLabelDetectionResult = rek.startContentModeration(req);
            startJobId = startModerationLabelDetectionResult.getJobId();
        }

        private static void GetUnsafeContentDetectionResults() throws Exception {

            int maxResults = 10;
            String paginationToken = null;
            GetContentModerationResult moderationLabelDetectionResult = null;

            do {
                if (moderationLabelDetectionResult != null) {
                    paginationToken = moderationLabelDetectionResult.getNextToken();
                }

                moderationLabelDetectionResult = rek.getContentModeration(
                        new GetContentModerationRequest()
                                .withJobId(startJobId)
                                .withNextToken(paginationToken)
                                .withSortBy(ContentModerationSortBy.TIMESTAMP)
                                .withMaxResults(maxResults));

                VideoMetadata videoMetaData = moderationLabelDetectionResult.getVideoMetadata();

                System.out.println("Format: " + videoMetaData.getFormat());
                System.out.println("Codec: " + videoMetaData.getCodec());
                System.out.println("Duration: " + videoMetaData.getDurationMillis());
                System.out.println("FrameRate: " + videoMetaData.getFrameRate());

                // Show moderated content labels, confidence, and detection times.
                List<ContentModerationDetection> moderationLabelsInFrames =
                        moderationLabelDetectionResult.getModerationLabels();

                for (ContentModerationDetection label : moderationLabelsInFrames) {
                    long seconds = label.getTimestamp() / 1000;
                    System.out.print("Sec: " + Long.toString(seconds));
                    System.out.println(label.getModerationLabel().toString());
                    System.out.println();
                }
            } while (moderationLabelDetectionResult != null && moderationLabelDetectionResult.getNextToken() != null);
        }
   ```

   In the function `main`, replace the lines: 

   ```
        StartLabelDetection(bucket, video);
   
           if (GetSQSMessageSuccess()==true)
           	GetLabelDetectionResults();
   ```

   with:

   ```
        StartUnsafeContentDetection(bucket, video);
   
           if (GetSQSMessageSuccess()==true)
           	GetUnsafeContentDetectionResults();
   ```

------
#### [ Java V2 ]

   This code is taken from the AWS Documentation SDK examples GitHub repository. See the full example [here](https://github.com/awsdocs/aws-doc-sdk-examples/blob/master/javav2/example_code/rekognition/src/main/java/com/example/rekognition/VideoDetectInappropriate.java).

   ```
   import software.amazon.awssdk.regions.Region;
   import software.amazon.awssdk.services.rekognition.RekognitionClient;
   import software.amazon.awssdk.services.rekognition.model.NotificationChannel;
   import software.amazon.awssdk.services.rekognition.model.S3Object;
   import software.amazon.awssdk.services.rekognition.model.Video;
   import software.amazon.awssdk.services.rekognition.model.StartContentModerationRequest;
   import software.amazon.awssdk.services.rekognition.model.StartContentModerationResponse;
   import software.amazon.awssdk.services.rekognition.model.RekognitionException;
   import software.amazon.awssdk.services.rekognition.model.GetContentModerationResponse;
   import software.amazon.awssdk.services.rekognition.model.GetContentModerationRequest;
   import software.amazon.awssdk.services.rekognition.model.VideoMetadata;
   import software.amazon.awssdk.services.rekognition.model.ContentModerationDetection;
   import java.util.List;
   
   /**
    * Before running this Java V2 code example, set up your development
    * environment, including your credentials.
    *
    * For more information, see the following documentation topic:
    *
    * https://docs.aws.amazon.com/sdk-for-java/latest/developer-guide/get-started.html
    */
   public class VideoDetectInappropriate {
       private static String startJobId = "";
   
       public static void main(String[] args) {
   
           final String usage = """
   
                   Usage:    <bucket> <video> <topicArn> <roleArn>
   
                   Where:
                   bucket - The name of the bucket in which the video is located (for example, myBucket).\s
                      video - The name of video (for example, people.mp4).\s
                      topicArn - The ARN of the Amazon Simple Notification Service (Amazon SNS) topic.\s
                      roleArn - The ARN of the AWS Identity and Access Management (IAM) role to use.\s
                   """;
   
           if (args.length != 4) {
               System.out.println(usage);
               System.exit(1);
           }
   
           String bucket = args[0];
           String video = args[1];
           String topicArn = args[2];
           String roleArn = args[3];
           Region region = Region.US_EAST_1;
           RekognitionClient rekClient = RekognitionClient.builder()
                   .region(region)
                   .build();
   
           NotificationChannel channel = NotificationChannel.builder()
                   .snsTopicArn(topicArn)
                   .roleArn(roleArn)
                   .build();
   
           startModerationDetection(rekClient, channel, bucket, video);
           getModResults(rekClient);
           System.out.println("This example is done!");
           rekClient.close();
       }
   
       public static void startModerationDetection(RekognitionClient rekClient,
               NotificationChannel channel,
               String bucket,
               String video) {
   
           try {
               S3Object s3Obj = S3Object.builder()
                       .bucket(bucket)
                       .name(video)
                       .build();
   
               Video vidOb = Video.builder()
                       .s3Object(s3Obj)
                       .build();
   
               StartContentModerationRequest modDetectionRequest = StartContentModerationRequest.builder()
                       .jobTag("Moderation")
                       .notificationChannel(channel)
                       .video(vidOb)
                       .build();
   
               StartContentModerationResponse startModDetectionResult = rekClient
                       .startContentModeration(modDetectionRequest);
               startJobId = startModDetectionResult.jobId();
   
           } catch (RekognitionException e) {
               System.out.println(e.getMessage());
               System.exit(1);
           }
       }
   
       public static void getModResults(RekognitionClient rekClient) {
           try {
               String paginationToken = null;
               GetContentModerationResponse modDetectionResponse = null;
               boolean finished = false;
               String status;
               int yy = 0;
   
               do {
                   if (modDetectionResponse != null)
                       paginationToken = modDetectionResponse.nextToken();
   
                   GetContentModerationRequest modRequest = GetContentModerationRequest.builder()
                           .jobId(startJobId)
                           .nextToken(paginationToken)
                           .maxResults(10)
                           .build();
   
                   // Wait until the job succeeds.
                   while (!finished) {
                       modDetectionResponse = rekClient.getContentModeration(modRequest);
                       status = modDetectionResponse.jobStatusAsString();
   
                       if (status.compareTo("SUCCEEDED") == 0)
                           finished = true;
                       else {
                           System.out.println(yy + " status is: " + status);
                           Thread.sleep(1000);
                       }
                       yy++;
                   }
   
                   finished = false;
   
                   // Proceed when the job is done - otherwise VideoMetadata is null.
                   VideoMetadata videoMetaData = modDetectionResponse.videoMetadata();
                   System.out.println("Format: " + videoMetaData.format());
                   System.out.println("Codec: " + videoMetaData.codec());
                   System.out.println("Duration: " + videoMetaData.durationMillis());
                   System.out.println("FrameRate: " + videoMetaData.frameRate());
                   System.out.println("Job");
   
                   List<ContentModerationDetection> mods = modDetectionResponse.moderationLabels();
                   for (ContentModerationDetection mod : mods) {
                       long seconds = mod.timestamp() / 1000;
                       System.out.print("Mod label: " + seconds + " ");
                       System.out.println(mod.moderationLabel().toString());
                       System.out.println();
                   }
   
               } while (modDetectionResponse != null && modDetectionResponse.nextToken() != null);
   
           } catch (RekognitionException | InterruptedException e) {
               System.out.println(e.getMessage());
               System.exit(1);
           }
       }
   }
   ```

------
#### [ Python ]

   ```
   #Copyright 2018 Amazon.com, Inc. or its affiliates. All Rights Reserved.
    #SPDX-License-Identifier: MIT-0 (For details, see https://github.com/awsdocs/amazon-rekognition-developer-guide/blob/master/LICENSE-SAMPLECODE.)
   
       # ============== Unsafe content =============== 
       def StartUnsafeContent(self):
           response=self.rek.start_content_moderation(Video={'S3Object': {'Bucket': self.bucket, 'Name': self.video}},
               NotificationChannel={'RoleArn': self.roleArn, 'SNSTopicArn': self.snsTopicArn})
   
           self.startJobId=response['JobId']
           print('Start Job Id: ' + self.startJobId)
   
       def GetUnsafeContentResults(self):
           maxResults = 10
           paginationToken = ''
           finished = False
   
           while finished == False:
               response = self.rek.get_content_moderation(JobId=self.startJobId,
                                                   MaxResults=maxResults,
                                                   NextToken=paginationToken,
                                                   SortBy="NAME",
                                                   AggregateBy="TIMESTAMPS")
   
               print('Codec: ' + response['VideoMetadata']['Codec'])
               print('Duration: ' + str(response['VideoMetadata']['DurationMillis']))
               print('Format: ' + response['VideoMetadata']['Format'])
               print('Frame rate: ' + str(response['VideoMetadata']['FrameRate']))
               print()
   
               for contentModerationDetection in response['ModerationLabels']:
                   print('Label: ' +
                       str(contentModerationDetection['ModerationLabel']['Name']))
                   print('Confidence: ' +
                       str(contentModerationDetection['ModerationLabel']['Confidence']))
                   print('Parent category: ' +
                       str(contentModerationDetection['ModerationLabel']['ParentName']))
                   print('Timestamp: ' + str(contentModerationDetection['Timestamp']))
                   print()
   
               if 'NextToken' in response:
                   paginationToken = response['NextToken']
               else:
                   finished = True
   ```

   In the function `main`, replace the lines:

   ```
       analyzer.StartLabelDetection()
       if analyzer.GetSQSMessageSuccess()==True:
           analyzer.GetLabelDetectionResults()
   ```

   with:

   ```
       analyzer.StartUnsafeContent()
       if analyzer.GetSQSMessageSuccess()==True:
           analyzer.GetUnsafeContentResults()
   ```

------
**Note**  
If you've already run a video example other than [Analyzing a video stored in an Amazon S3 bucket with Java or Python (SDK)](video-analyzing-with-sqs.md), the code to replace might be different.

1. Run the code. A list of inappropriate content labels detected in the video is shown.

## GetContentModeration operation response
<a name="getcontentmoderation-operationresponse"></a>

The response from `GetContentModeration` is an array, `ModerationLabels`, of [ContentModerationDetection](https://docs.aws.amazon.com/rekognition/latest/APIReference/API_ContentModerationDetection.html) objects. The array contains an element for each time an inappropriate content label is detected. Within a `ContentModerationDetection` object, [ModerationLabel](https://docs.aws.amazon.com/rekognition/latest/APIReference/API_ModerationLabel.html) contains information for a detected item of inappropriate or offensive content. `Timestamp` is the time, in milliseconds from the start of the video, when the label was detected. The labels are organized hierarchically in the same manner as the labels detected by inappropriate content image analysis. For more information, see [Moderating content](moderation.md).

The following is an example response from `GetContentModeration`, sorted by `TIMESTAMP` and aggregated by `TIMESTAMPS`.

```
{
    "JobStatus": "SUCCEEDED",
    "VideoMetadata": {
        "Codec": "h264",
        "DurationMillis": 54100,
        "Format": "QuickTime / MOV",
        "FrameRate": 30.0,
        "FrameHeight": 462,
        "FrameWidth": 884,
        "ColorRange": "LIMITED"
    },
    "ModerationLabels": [
        {
            "Timestamp": 36000,
            "ModerationLabel": {
                "Confidence": 52.451576232910156,
                "Name": "Alcohol",
                "ParentName": "",
                "TaxonomyLevel": 1
            },
            "ContentTypes": [
                {
                    "Confidence": 99.9999008178711,
                    "Name": "Animated"
                }
            ]
        },
        {
            "Timestamp": 36000,
            "ModerationLabel": {
                "Confidence": 52.451576232910156,
                "Name": "Alcoholic Beverages",
                "ParentName": "Alcohol",
                "TaxonomyLevel": 2
            },
            "ContentTypes": [
                {
                    "Confidence": 99.9999008178711,
                    "Name": "Animated"
                }
            ]
        }
    ],
    "ModerationModelVersion": "7.0",
    "JobId": "a1b2c3d4...",
    "Video": {
        "S3Object": {
            "Bucket": "amzn-s3-demo-bucket",
            "Name": "video-name.mp4"
        }
    },
    "GetRequestMetadata": {
        "SortBy": "TIMESTAMP",
        "AggregateBy": "TIMESTAMPS"
    }
}
```

The following is an example response from `GetContentModeration`, sorted by `TIMESTAMP` and aggregated by `SEGMENTS`.

```
{
    "JobStatus": "SUCCEEDED",
    "VideoMetadata": {
        "Codec": "h264",
        "DurationMillis": 54100,
        "Format": "QuickTime / MOV",
        "FrameRate": 30.0,
        "FrameHeight": 462,
        "FrameWidth": 884,
        "ColorRange": "LIMITED"
    },
    "ModerationLabels": [
        {
            "Timestamp": 0,
            "ModerationLabel": {
                "Confidence": 0.0003000000142492354,
                "Name": "Alcohol Use",
                "ParentName": "Alcohol",
                "TaxonomyLevel": 2
            },
            "StartTimestampMillis": 0,
            "EndTimestampMillis": 29520,
            "DurationMillis": 29520,
            "ContentTypes": [
                {
                    "Confidence": 99.9999008178711,
                    "Name": "Illustrated"
                },
                {
                    "Confidence": 99.9999008178711,
                    "Name": "Animated"
                }
            ]
        }
    ],
    "ModerationModelVersion": "7.0",
    "JobId": "a1b2c3d4...",
    "Video": {
        "S3Object": {
            "Bucket": "amzn-s3-demo-bucket",
            "Name": "video-name.mp4"
        }
    },
    "GetRequestMetadata": {
        "SortBy": "TIMESTAMP",
        "AggregateBy": "SEGMENTS"
    }
}
```
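
With `SEGMENTS` aggregation, each detection carries `StartTimestampMillis`, `EndTimestampMillis`, and `DurationMillis`, so you can tally the total flagged time per label directly. A minimal sketch, operating on the `ModerationLabels` array shown above:

```python
def flagged_millis_per_label(moderation_labels):
    """Sum DurationMillis per label name from a SEGMENTS-aggregated response."""
    totals = {}
    for detection in moderation_labels:
        name = detection["ModerationLabel"]["Name"]
        totals[name] = totals.get(name, 0) + detection.get("DurationMillis", 0)
    return totals
```

For the example response above, this yields 29,520 ms of flagged content for `Alcohol Use`.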

# Enhancing accuracy with Custom Moderation
<a name="moderation-custom-moderation"></a>

 Amazon Rekognition’s [DetectModerationLabels](https://docs.aws.amazon.com/rekognition/latest/APIReference/API_DetectModerationLabels.html) API lets you detect content that is inappropriate, unwanted, or offensive. The Rekognition Custom Moderation feature allows you to enhance the accuracy of [DetectModerationLabels](https://docs.aws.amazon.com/rekognition/latest/APIReference/API_DetectModerationLabels.html) by using adapters. Adapters are modular components that can be added to an existing Rekognition deep learning model, extending its capabilities for the tasks it’s trained on. By creating an adapter and providing it to the [DetectModerationLabels](https://docs.aws.amazon.com/rekognition/latest/APIReference/API_DetectModerationLabels.html) operation, you can achieve better accuracy for the content moderation tasks related to your specific use case.

To customize Rekognition’s content moderation model for specific moderation labels, you create a project and train an adapter on a set of images that you provide. You can then iteratively check the adapter’s performance and retrain it until it reaches your desired level of accuracy. A project contains the different versions of your adapters.

You can use the Rekognition console to create projects and adapters. Alternatively, you can use an AWS SDK and the associated API operations to create a project, train an adapter, and manage your adapters.
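
As a sketch of the SDK path, a trained adapter is applied by passing its project-version ARN in the `ProjectVersion` parameter of `DetectModerationLabels`. The bucket, image name, and ARN below are placeholders, and the request-builder helper is illustrative:

```python
def build_adapter_request(bucket, photo, adapter_arn, min_confidence=60):
    """Build DetectModerationLabels parameters that include an adapter (illustrative helper)."""
    return {
        "Image": {"S3Object": {"Bucket": bucket, "Name": photo}},
        "MinConfidence": min_confidence,
        "ProjectVersion": adapter_arn,  # ARN of the trained adapter's project version
    }

def detect_with_adapter(bucket, photo, adapter_arn):
    """Call DetectModerationLabels with a Custom Moderation adapter (requires AWS credentials)."""
    import boto3
    client = boto3.client("rekognition")
    return client.detect_moderation_labels(**build_adapter_request(bucket, photo, adapter_arn))
```

Omitting `ProjectVersion` runs the same request against the base moderation model, which makes it easy to compare base and adapter results on the same images.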



# Creating and using adapters
<a name="creating-and-using-adapters"></a>

Adapters are modular components that can be added to the existing Rekognition deep learning model, extending its capabilities for the tasks it’s trained on. By training a deep learning model with adapters, you can achieve better accuracy for image analysis tasks related to your specific use case. 

To create and use an adapter, you must provide training and testing data to Rekognition. You can accomplish this in one of two different ways:
+ Bulk analysis and verification - You can create a training dataset by submitting images in bulk for Rekognition to analyze and label. You can then review the generated annotations for your images and verify or correct the predictions. For more information about how bulk analysis of images works, see [Bulk analysis](https://docs.aws.amazon.com/rekognition/latest/dg/bulk-analysis.html).
+ Manual annotation - With this approach, you create your training data by uploading and annotating images. You create your test data either by uploading and annotating images or by auto-splitting your training data.

Choose one of the following topics to learn more:

**Topics**
+ [Bulk analysis and verification](adapters-bulk-analysis.md)
+ [Manual annotation](adapters-manual-annotation.md)

# Bulk analysis and verification
<a name="adapters-bulk-analysis"></a>

With this approach, you upload a large number of images that you want to use as training data, and Rekognition analyzes them and automatically assigns predicted labels. You can use these predictions as a starting point for your adapter: verify their accuracy, and then train the adapter based on the verified predictions. You can do all of this in the AWS Management Console.



 The following video demonstrates how to use Rekognition's Bulk Analysis capability to obtain and verify predictions for a large number of images, and then train an adapter with those predictions. 

[![AWS Videos](https://img.youtube.com/vi/IGGMHPnPZLs/0.jpg)](https://www.youtube.com/watch?v=IGGMHPnPZLs)


## Upload images for bulk analysis
<a name="adapters-bulk-analysis-upload-images"></a>

To create a training dataset for your adapter, upload images in bulk for Rekognition to predict labels for. For best results, provide as many training images as possible, up to the limit of 10,000, and ensure that the images are representative of all aspects of your use case.

When using the AWS Console you can upload images directly from your computer or provide an Amazon Simple Storage Service bucket that stores your images. However, when using the Rekognition APIs with an SDK, you must provide a manifest file that references images stored in an Amazon Simple Storage Service bucket. See [Bulk analysis](https://docs.aws.amazon.com/rekognition/latest/dg/bulk-analysis.html) for more information.
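If you build the manifest file yourself, each line is a JSON object in the SageMaker Ground Truth style, pairing an image's S3 location (`source-ref`) with its annotations. The following Python sketch illustrates the general shape only; the field name `moderation-labels` is an illustrative placeholder, not the exact label-attribute schema Rekognition requires:

```python
import json

def build_manifest_lines(entries):
    """Build JSON Lines text for a Ground Truth-style manifest.

    entries: list of (s3_uri, labels) pairs, where labels is a list of
    moderation label names assigned to that image. The label-attribute
    name below is a placeholder for illustration.
    """
    lines = []
    for s3_uri, labels in entries:
        record = {
            "source-ref": s3_uri,          # S3 location of the image
            "moderation-labels": labels,   # assumed attribute name
        }
        lines.append(json.dumps(record))
    return "\n".join(lines)

manifest = build_manifest_lines([
    ("s3://amzn-s3-demo-source-bucket/train/img-001.jpg", ["Alcohol"]),
    ("s3://amzn-s3-demo-source-bucket/train/img-002.jpg", []),
])
```

Each line of the resulting text is one self-contained JSON record, which is what makes the JSON Lines format easy to stream and append to.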

## Review predictions
<a name="adapters-bulk-analysis-review-predictions"></a>

Once you have uploaded your images to the Rekognition console, Rekognition generates labels for them. You can then verify each prediction as one of the following categories: true positive, false positive, true negative, or false negative. After you have verified the predictions, you can train an adapter on your feedback.

## Train the adapter
<a name="adapters-bulk-analysis-train-adapter"></a>

Once you have finished verifying the predictions returned by bulk analysis, you can initiate the training process for your adapter. 

## Get the AdapterId
<a name="adapters-bulk-analysis-get-adapter"></a>

Once the adapter has been trained, you can get the unique ID for your adapter to use with Rekognition’s image analysis APIs.

## Call the API Operation
<a name="adapters-bulk-analysis-call-operation"></a>

To apply your custom adapter, provide its ID when calling one of the image analysis APIs that supports adapters. This enhances the accuracy of predictions for your images.
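As an illustration with the AWS SDK for Python (boto3), you can pass the adapter's project version ARN in the `ProjectVersion` parameter of `detect_moderation_labels`. The bucket, key, and ARN below are placeholders, and the helper function is only a sketch:

```python
def moderation_request(bucket, key, adapter_arn, min_confidence=60):
    """Build the DetectModerationLabels request for a custom adapter."""
    return {
        "Image": {"S3Object": {"Bucket": bucket, "Name": key}},
        "MinConfidence": min_confidence,
        # Passing the adapter's project version ARN routes the request
        # through your trained adapter instead of the base model alone.
        "ProjectVersion": adapter_arn,
    }

kwargs = moderation_request(
    "amzn-s3-demo-source-bucket",
    "images/photo.jpg",
    "arn:aws:rekognition:us-east-1:111122223333:project/my-project/version/my-adapter/1",
)
# With the AWS SDK for Python (boto3) configured:
# import boto3
# response = boto3.client("rekognition").detect_moderation_labels(**kwargs)
```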

# Manual annotation
<a name="adapters-manual-annotation"></a>

With this approach, you create your training data by uploading and annotating images manually. You create your test data by either uploading and annotating test images or by auto-splitting to have Rekognition automatically use a portion of your training data as test images.

## Uploading and annotating images
<a name="adapters-upload-sample-images"></a>

To train the adapter, you’ll need to upload a set of sample images representative of your use case. For best results, provide as many training images as possible, up to the limit of 10,000, and ensure the images are representative of all aspects of your use case.

![\[Interface showing options to import training images, with options to import a manifest file, import from S3 bucket, or upload images from computer. Includes an S3 URI field and note about ensuring read/write permissions.\]](http://docs.aws.amazon.com/rekognition/latest/dg/images/adapters-11-traiing-dataset.png)


When using the AWS Console you can upload images directly from your computer, provide a manifest file, or provide an Amazon S3 bucket that stores your images.

 However, when using the Rekognition APIs with an SDK, you must provide a manifest file that references images stored in an Amazon S3 bucket. 

You can use the [Rekognition console](https://console.aws.amazon.com/rekognition)'s annotation interface to annotate your images. Annotate your images by tagging them with labels; this establishes a "ground truth" for training. You must also designate training and testing sets, or use the auto-split feature, before you can train an adapter. When you finish designating your datasets and annotating your images, you can create an adapter based on the annotated images in your training set. You can then evaluate the performance of your adapter with your testing set.

## Create a test set
<a name="adapters-training-testing"></a>

You will need to provide an annotated test set or use the auto-split feature. The training set is used to actually train the adapter; the adapter learns the patterns contained in these annotated images. The test set is used to evaluate the model's performance before finalizing the adapter.
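Conceptually, auto-split reserves a portion of your annotated images for testing. The following Python sketch illustrates the idea with a deterministic 80/20 split; the fraction and seed are illustrative assumptions, not the console's actual behavior:

```python
import random

def auto_split(image_ids, test_fraction=0.2, seed=42):
    """Split annotated images into (train, test) sets.

    A fixed seed keeps the split reproducible across runs, so the same
    images always land in the same dataset.
    """
    rng = random.Random(seed)
    shuffled = list(image_ids)
    rng.shuffle(shuffled)
    n_test = max(1, int(len(shuffled) * test_fraction))
    return shuffled[n_test:], shuffled[:n_test]

train, test = auto_split([f"img-{i}.jpg" for i in range(100)])
```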

## Train the adapter
<a name="adapters-train-adapter"></a>

 Once you have finished annotating the training data, or have provided a manifest file, you can initiate the training process for your adapter. 

## Get the Adapter ID
<a name="adapter-get-adapter"></a>

Once the adapter has been trained, you can get the unique ID for your adapter to use with Rekognition's image analysis APIs.

## Call the API operation
<a name="adapter-call-operation"></a>

To apply your custom adapter, provide its ID when calling one of the image analysis APIs that supports adapters. This enhances the accuracy of predictions for your images. 

# Preparing your datasets
<a name="preparing-datasets-adapters"></a>

Creating an adapter requires you to provide Rekognition with two datasets, a training dataset and a testing dataset. Each dataset comprises two elements: images and annotations (labels). The following sections explain what labels and images are used for and how they come together to create datasets.

## Images
<a name="preparing-datasets-adapters-images"></a>

You will need to train an adapter on representative samples of your images. When you select images for training, try to include at least a few images that demonstrate the expected response for each of the labels you are targeting with your adapter. 

To create a training dataset, you need to provide one of the following two image types:
+ Images with False Positive predictions. For example, when a base model predicts that an image has alcohol, but it doesn't.
+ Images with False Negative predictions. For example, when a base model predicts that an image doesn't have alcohol, but it does. 

To create a balanced dataset, it is recommended that you provide one of the following two image types:
+ Images with True Positive predictions. For example, when a base model correctly predicts that an image has alcohol. It is recommended to provide these images if you provide False Positive images.
+ Images with True Negative predictions. For example, when a base model correctly predicts that an image doesn't have alcohol. It is recommended to provide these images if you provide False Negative images.
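As an illustrative aid, you could tally your verified annotations per label before training to confirm the dataset contains the recommended counterbalancing examples. The helper below is hypothetical; its default minimums mirror the 20 false-positive and 50 false-negative verifications mentioned elsewhere in this guide:

```python
from collections import Counter

def check_balance(annotations, min_fp=20, min_fn=50):
    """Flag labels that lack enough verified error examples.

    annotations: list of (label, category) pairs, where category is one
    of "TP", "FP", "TN", "FN".
    """
    counts = Counter(annotations)
    issues = []
    for label in sorted({label for label, _ in annotations}):
        if counts[(label, "FP")] < min_fp:
            issues.append((label, "needs more verified false positives"))
        if counts[(label, "FN")] < min_fn:
            issues.append((label, "needs more verified false negatives"))
    return issues
```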

## Labels
<a name="preparing-datasets-adapters-labels"></a>

A label refers to any of the following: objects, events, concepts or activities. For Content Moderation, a label is an instance of content that is inappropriate, unwanted, or offensive. 

In the context of creating an adapter by training Rekognition’s base model, when a label is assigned to an image it’s called an annotation. When training an adapter with the Rekognition Console, you’ll use the Console to add annotations to your images by choosing a label and then tagging the images that correspond with the label. Through this process, the model learns to identify elements of your images based on the assigned label. This linking process allows the model to focus on the most relevant content when an adapter is created, leading to improved accuracy for image analysis.

Alternatively, you can provide a manifest file, which contains information on your images and the annotations that go with them.

## Training and testing datasets
<a name="preparing-datasets-adapters-datasets"></a>

The training dataset is the basis for fine-tuning the model and creating a custom adapter. You must provide an annotated training dataset for the model to learn from. The model learns from this dataset to improve its performance on the type of images you provide. 

To improve accuracy, you must create your training dataset by annotating (labeling) images. You can accomplish this in two ways:
+ Manual label assignment - You can use the Rekognition Console to create a training dataset by uploading the images you want your dataset to contain and then manually assigning labels to these images.
+ Manifest file - You can use a manifest file to train your adapter. The manifest file contains information on the ground-truth annotations for your training and testing images, as well as the location of your training images. You can provide the manifest file when training an adapter using the Rekognition APIs or when using the AWS Console.

The testing dataset is used to evaluate the adapter’s performance after training. To ensure reliable evaluation, the testing dataset is created from a slice of the original training dataset that the model hasn’t seen before. This process ensures that the adapter’s performance is assessed with new data, producing accurate measurements and metrics. For optimal accuracy improvements, see [Best practices for training adapters](using-adapters-best-practices.md).

# Managing adapters with the AWS CLI and SDKs
<a name="managing-adapters"></a>

Rekognition lets you make use of multiple features that leverage pre-trained computer vision models. With these models you can carry out tasks like label detection and content moderation. You can also customize certain models using an adapter.

You can make use of Rekognition’s project creation and project management APIs (like [CreateProject](https://docs.aws.amazon.com/rekognition/latest/APIReference/API_CreateProject.html) and [CreateProjectVersion](https://docs.aws.amazon.com/rekognition/latest/APIReference/API_CreateProjectVersion.html)) to create and train adapters. The following pages describe how to use the API operations to create, train, and manage your adapters, using the AWS Console, your chosen AWS SDK, or the AWS CLI. 

After you train an adapter you can use it when running inference with supported features. Currently, adapters are supported when using the Content Moderation feature.

When you train an adapter using an AWS SDK you must provide your ground-truth labels (image annotations) in the form of a manifest file. Alternatively, you can use the Rekognition Console to create and train an adapter.

**Note**  
 Adapters cannot be copied. Only Rekognition Custom Labels project versions can be copied. 

**Topics**
+ [Adapter statuses](#managing-adapters-project-versions-statuses)
+ [Creating a project](managing-adapters-create-project.md)
+ [Describing projects](managing-adapters-describe-projects.md)
+ [Deleting a project](managing-adapters-delete-project.md)
+ [Creating a project version](managing-adapters-create-project-version.md)
+ [Describing a project version](managing-adapters-describe-project.md)
+ [Deleting a project version](managing-adapters-delete-project-version.md)

## Adapter statuses
<a name="managing-adapters-project-versions-statuses"></a>

Custom Moderation adapters (project versions) can be in one of the following statuses:
+ TRAINING_IN_PROGRESS - The adapter is in the process of training on the images you provided as training data.
+ TRAINING_COMPLETED - The adapter has successfully completed training and is ready for you to review its performance.
+ TRAINING_FAILED - The adapter has failed to complete its training for some reason. Review the output manifest file and output manifest summary for information on the cause of the failure.
+ DELETING - The adapter is in the process of being deleted.
+ DEPRECATED - The adapter was trained on an older version of the Content Moderation base model. It is in a grace period and will expire within 60 to 90 days of the release of the new base model version. During the grace period, you can still use the adapter for inference with the [DetectModerationLabels](https://docs.aws.amazon.com/rekognition/latest/APIReference/API_DetectModerationLabels.html) or [StartMediaAnalysisJob](https://docs.aws.amazon.com/rekognition/latest/APIReference/API_StartMediaAnalysisJob.html) API operations. Refer to the Custom Moderation console for the expiry date of your adapters.
+ EXPIRED - The adapter was trained on an older version of the Content Moderation base model and it can no longer be used to obtain custom results with the DetectModerationLabels or StartMediaAnalysisJob API operations. If an Expired adapter is specified in an inference request, it will be ignored and the response is returned from the most recent version of the Custom Moderation base model instead. 
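When training through the API, a common pattern is to poll the adapter's status until it reaches one of the terminal statuses above. The sketch below uses a pluggable `get_status` callable (which, in practice, would wrap a DescribeProjectVersions call) so the loop can be shown without AWS credentials:

```python
import time

TERMINAL_STATUSES = {"TRAINING_COMPLETED", "TRAINING_FAILED",
                     "DEPRECATED", "EXPIRED"}

def wait_for_training(get_status, poll_seconds=30, max_polls=240):
    """Poll an adapter's status until it leaves TRAINING_IN_PROGRESS.

    get_status is a zero-argument callable that returns the current
    status string, e.g. by wrapping a DescribeProjectVersions call.
    """
    for _ in range(max_polls):
        status = get_status()
        if status in TERMINAL_STATUSES:
            return status
        time.sleep(poll_seconds)
    raise TimeoutError("adapter did not reach a terminal status in time")
```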

# Creating a project
<a name="managing-adapters-create-project"></a>

With the [CreateProject](https://docs.aws.amazon.com/rekognition/latest/APIReference/API_CreateProject.html) operation you can create a project that will hold an adapter for Rekognition’s label detection operations. A project is a group of resources; in the case of label detection operations like DetectModerationLabels, a project allows you to store adapters that you can use to customize the base Rekognition model. When invoking CreateProject, you provide the name of the project you want to create as the ProjectName argument.

 To create a project with the AWS console: 
+ Sign in to the Rekognition Console
+ Click on **Custom Moderation**
+ Choose **Create Project**
+ Select either **Create a New Project** or **Add to an existing project**
+ Add a **Project name**
+ Add an **Adapter name**
+ Add a description if desired
+ Choose how you want to import your training images: Manifest file, from S3 bucket, or from your computer
+ Choose if you want to Autosplit your training data or import a manifest file
+ Select whether or not you want the project to automatically update
+ Click **Create project**

To create a project with the AWS CLI and SDK:

1. If you haven't already done so, install and configure the AWS CLI and the AWS SDKs. For more information, see [Step 2: Set up the AWS CLI and AWS SDKs](setup-awscli-sdk.md) .

1. Use the following code to create a project:

------
#### [ CLI ]

```
# Request
# Creating Content Moderation Project
aws rekognition create-project \
    --project-name "project-name" \
    --feature CONTENT_MODERATION \
    --auto-update ENABLED \
    --profile profile-name
```

------

# Describing projects
<a name="managing-adapters-describe-projects"></a>

You can use the [DescribeProjects](https://docs.aws.amazon.com/rekognition/latest/APIReference/API_DescribeProjects.html) API to get information about your projects, including information about all the adapters associated with a project. 

To describe projects with the AWS CLI and SDK:

1. If you haven't already done so, install and configure the AWS CLI and the AWS SDKs. For more information, see [Step 2: Set up the AWS CLI and AWS SDKs](setup-awscli-sdk.md) .

1. Use the following code to describe a project:

------
#### [ CLI ]

```
# Request
# Getting CONTENT_MODERATION project details 
aws rekognition describe-projects \
    --features CONTENT_MODERATION \
    --profile profile-name
```

------

# Deleting a project
<a name="managing-adapters-delete-project"></a>

You can delete a project by using the Rekognition console or by calling the [DeleteProject](https://docs.aws.amazon.com/rekognition/latest/APIReference/API_DeleteProject.html) API. To delete a project, you must first delete each of its associated adapters. A deleted project or adapter can't be recovered.

 To delete a project with the AWS console: 
+ Sign in to the Rekognition Console.
+ Click on **Custom Moderation**.
+ You must delete each adapter associated with your project before you can delete the project itself. Delete any adapters associated with the project by selecting the adapter and then selecting **Delete**.
+ Select the project and then select the **Delete** button.

To delete a project with the AWS CLI and SDK:

1. If you haven't already done so, install and configure the AWS CLI and the AWS SDKs. For more information, see [Step 2: Set up the AWS CLI and AWS SDKs](setup-awscli-sdk.md) .

1. Use the following code to delete a project: 

------
#### [ CLI ]

```
aws rekognition delete-project \
  --project-arn project_arn \
  --profile profile-name
```

------

# Creating a project version
<a name="managing-adapters-create-project-version"></a>

You can train an adapter for deployment by using the [CreateProjectVersion](https://docs.aws.amazon.com/rekognition/latest/APIReference/API_CreateProjectVersion.html) operation. CreateProjectVersion first creates a new version of an adapter associated with a project and then begins training the adapter. The response from CreateProjectVersion is an Amazon Resource Name (ARN) for the version of the model. Training takes a while to complete. You can get the current status by calling DescribeProjectVersions. When training a model, Rekognition uses the training and test datasets associated with the project. You create datasets using the console. For more information, see [Preparing your datasets](#preparing-datasets-adapters).

 To create a project version with the Rekognition console: 
+ Sign in to the AWS Rekognition Console
+ Click on **Custom Moderation**
+ Select a project
+ On the “Project detail” page, choose **Create adapter**
+ On the “Create a project” page, fill in the required details for Project Details, Training images, and Testing images, then select **Create project**.
+ On the “Assign labels to images” page, add labels to your images and when finished select **Start training**

To create a project version with the AWS CLI and SDK:

1. If you haven't already done so, install and configure the AWS CLI and the AWS SDKs. For more information, see [Step 2: Set up the AWS CLI and AWS SDKs](setup-awscli-sdk.md) .

1. Use the following code to create a project version: 

------
#### [ CLI ]

```
# Request
aws rekognition create-project-version \
 --project-arn project-arn \
 --version-name "version-name" \
 --training-data '{"Assets":[{"GroundTruthManifest":{"S3Object":{"Bucket":"amzn-s3-demo-source-bucket","Name":"manifest.json"}}}]}' \
 --output-config S3Bucket=amzn-s3-demo-destination-bucket,S3KeyPrefix=my-results \
 --feature-config "ContentModeration={ConfidenceThreshold=70}" \
 --profile profile-name
```

------
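The CLI parameters above map to a nested request structure. The following Python sketch builds that structure as a dictionary you could pass to boto3's `create_project_version`; all names and ARNs are placeholders:

```python
def create_project_version_request(project_arn, version_name,
                                   manifest_bucket, manifest_key,
                                   output_bucket, output_prefix,
                                   confidence_threshold=70):
    """Build the CreateProjectVersion request body for an adapter."""
    return {
        "ProjectArn": project_arn,
        "VersionName": version_name,
        "TrainingData": {
            "Assets": [{
                "GroundTruthManifest": {
                    # Manifest location: the bucket and object key of
                    # the JSON Lines file with your annotations.
                    "S3Object": {"Bucket": manifest_bucket,
                                 "Name": manifest_key}
                }
            }]
        },
        "OutputConfig": {"S3Bucket": output_bucket,
                         "S3KeyPrefix": output_prefix},
        "FeatureConfig": {
            "ContentModeration": {
                "ConfidenceThreshold": confidence_threshold
            }
        },
    }

# req = create_project_version_request(...)
# boto3.client("rekognition").create_project_version(**req)
```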

# Describing a project version
<a name="managing-adapters-describe-project"></a>

You can list and describe adapters associated with a project by using the [DescribeProjectVersions](https://docs.aws.amazon.com/rekognition/latest/APIReference/API_DescribeProjectVersions.html) operation. You can specify up to 10 model versions in ProjectVersionArns. If you don't specify a value, descriptions for all model versions in the project are returned. 

To describe a project version with the AWS CLI and SDK:

1. If you haven't already done so, install and configure the AWS CLI and the AWS SDKs. For more information, see [Step 2: Set up the AWS CLI and AWS SDKs](setup-awscli-sdk.md) .

1. Use the following code to describe a project version:

------
#### [ CLI ]

```
aws rekognition describe-project-versions \
  --project-arn project_arn \
  --version-names [versions]
```

------
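A typical follow-up is to extract the newest adapter's ARN and status from the response. The sketch below assumes the standard `ProjectVersionDescriptions` response shape; the sample values are illustrative:

```python
def latest_adapter(response):
    """Return (arn, status) for the most recently created project version."""
    versions = response.get("ProjectVersionDescriptions", [])
    if not versions:
        return None
    # CreationTimestamp orders versions from oldest to newest.
    newest = max(versions, key=lambda v: v["CreationTimestamp"])
    return newest["ProjectVersionArn"], newest["Status"]

sample = {"ProjectVersionDescriptions": [
    {"ProjectVersionArn": "arn:aws:rekognition:us-east-1:111122223333:project/p/version/v1/1",
     "Status": "TRAINING_COMPLETED", "CreationTimestamp": 1700000000},
]}
```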

# Deleting a project version
<a name="managing-adapters-delete-project-version"></a>

You can delete a Rekognition adapter associated with a project using the [DeleteProjectVersion](https://docs.aws.amazon.com/rekognition/latest/APIReference/API_DeleteProjectVersion.html) operation. You can't delete an adapter if it's running or training. To check the status of an adapter, call the DescribeProjectVersions operation and check the Status field it returns. To stop a running adapter, call StopProjectVersion. If the adapter is training, wait until it finishes training before deleting it. You must delete each adapter associated with your project before you can delete the project itself.

 To delete a project version with the Rekognition console: 
+ Sign in to the Rekognition Console
+ Click on **Custom Moderation**
+ From the Projects tab you can see all your projects and associated adapters. Select an adapter and then select **Delete**.

To delete a project version with the AWS CLI and SDK:

1. If you haven't already done so, install and configure the AWS CLI and the AWS SDKs. For more information, see [Step 2: Set up the AWS CLI and AWS SDKs](setup-awscli-sdk.md) .

1. Use the following code to delete a project version:

------
#### [ CLI ]

```
# Request
aws rekognition delete-project-version \
  --project-version-arn model_arn \
  --profile profile-name
```

------

# Custom Moderation adapter tutorial
<a name="using-adapters-tutorial"></a>

This tutorial shows you how to create, train, evaluate, use, and manage adapters using the Rekognition Console. To create, use, and manage adapters with the AWS SDK, see [Managing adapters with the AWS CLI and SDKs](managing-adapters.md).

Adapters let you enhance the accuracy of Rekognition’s API operations, customizing the model’s behavior to fit your own needs and use cases. After you create an adapter with this tutorial, you’ll be able to use it when analyzing your own images with operations like [DetectModerationLabels](https://docs.aws.amazon.com/rekognition/latest/APIReference/API_DetectModerationLabels.html), as well as retrain the adapter for further, future improvements. 

In this tutorial you’ll learn how to:
+ Create a project using Rekognition Console
+ Annotate your training data
+ Train your adapter on your training dataset
+ Review your adapter’s performance
+ Use your adapter for image analysis

## Prerequisites
<a name="using-adapters-tutorial-prereqs"></a>

Before completing this tutorial it’s recommended that you read through [Creating and using adapters](creating-and-using-adapters.md).

To create an adapter, you can use the Rekognition Console tool to create a project, upload and annotate your own images, and then train an adapter on these images. See [Creating a project and training an adapter](#using-adapters-tutorial-annotation) to get started.

Alternatively, you can use Rekognition’s Console or API to retrieve predictions for images and then verify the predictions before training an adapter on these predictions. See [Bulk analysis, prediction verification, and training an adapter](#using-adapters-tuorial-annotation-bulk-analysis) to get started.

## Image annotation
<a name="using-adapters-tutorial-image-annotation"></a>

You can annotate images yourself by labeling them with the Rekognition console, or use Rekognition bulk analysis to annotate images and then verify that they have been labeled correctly. Choose one of the topics below to get started.

**Topics**
+ [Creating a project and training an adapter](#using-adapters-tutorial-annotation)
+ [Bulk analysis, prediction verification, and training an adapter](#using-adapters-tuorial-annotation-bulk-analysis)

### Creating a project and training an adapter
<a name="using-adapters-tutorial-annotation"></a>

Complete the following steps to train your adapter by annotating images using the Rekognition console.

**Create a project**

Before you can train or use an adapter you must create the project that will contain it. You must also provide the images used to train your adapter. To create a project, an adapter, and your image datasets: 

1. Sign in to the AWS Management Console and open the Rekognition console at https://console.aws.amazon.com/rekognition/.

1. In the left pane, choose **Custom Moderation**. The Rekognition Custom Moderation landing page is shown.  
![\[Rekognition Custom Moderation interface showing no existing fine-tuned adapters and options to create a new project or search.\]](http://docs.aws.amazon.com/rekognition/latest/dg/images/adapters-1-landing-page.png)

1. The Custom Moderation landing page shows you a list of all your projects and adapters, and there is also a button to create an adapter. Choose **Create project** to create a new project and adapter.

1. If this is your first time creating an adapter you will be prompted to create an Amazon S3 bucket to store files related to your project and your adapter. Choose **Create Amazon S3 bucket**.

1. On the following page, fill in the **adapter name** and the **project name**. Provide an adapter description if you wish.   
![\[Form to enter project details including a project name, adapter name, and optional adapter description. Options to import training image dataset from a manifest file or S3 bucket.\]](http://docs.aws.amazon.com/rekognition/latest/dg/images/adapters-2-project-details.png)

1. In this step, you’ll also provide the images for your adapter. You can select: **Import images from your computer**, **Import manifest file**, or **Import images from Amazon S3 bucket**. If you choose to import your images from an Amazon S3 bucket, provide the path to the bucket and folder that contains your training images. If you upload your images directly from your computer, note that you can only upload up to 30 images at one time. If you are using a manifest file that contains annotations, you can skip the steps listed below covering image annotation and proceed to the section on [Reviewing adapter performance](#using-adapters-tutorial-performance).

1. In the **Test dataset details** section, choose **Autosplit** to have Rekognition automatically select the appropriate percentage of your images as testing data, or you can choose **Manually import manifest file**.

1. After filling in this information, select **Create Project**.

**Train an adapter**

To train an adapter on your own un-annotated images:

1. Select the project that contains your adapter and then choose the option to **Assign label to images**. 

1. On the **Assign label to images** page, you can see all the images that have been uploaded as training images. You can filter these images by both labeled/unlabeled status and by label category using the two attribute selection panels on the left. You can add additional images to your training dataset by selecting the **Add images** button.  
![\[Image labeling interface with instructions, adapter details, and empty image panel.\]](http://docs.aws.amazon.com/rekognition/latest/dg/images/adapters-4-assign-labels-to-images.png)

1. After adding images to the training dataset, you must annotate them with labels. The "Assign labels to images" page updates to show the images you’ve uploaded, and you are prompted to select the appropriate labels for your images from a drop-down list of labels supported by Rekognition moderation. You can select more than one label.

1. Continue this process until you have added labels to each of the images in your training data.

1. After you have labeled all your data, select **Start training** to start training the model, which creates your adapter.  
![\[Interface showing 2 images with options to assign labels for categories like explicit nudity, suggestive content, violence, hate symbols, alcohol, drugs, tobacco, etc.\]](http://docs.aws.amazon.com/rekognition/latest/dg/images/adapters-5-labels-images-blurred.png)

1. Before you start the training process you can add any **Tags** to the adapter that you’d like to. You can also provide the adapter with a custom encryption key or use an AWS KMS key. Once you have finished adding any tags you want and customizing the encryption to your liking, select **Train adapter** to start the training process for your adapter. 

1. Wait for your adapter to finish training. Once training has completed, you’ll receive a notification that your adapter has finished being created.

Once the status of your adapter is “Training completed”, you can review your adapter’s metrics.

### Bulk analysis, prediction verification, and training an adapter
<a name="using-adapters-tuorial-annotation-bulk-analysis"></a>

Complete the following steps to train your adapter by verifying bulk analysis predictions from Rekognition’s Content Moderation model.

 To train an adapter by verifying predictions from Rekognition’s Content Moderation model, you must: 

1.  Carry out Bulk analysis on your images 

1.  Verify the predictions returned for your images 

You can obtain predictions for images by carrying out Bulk analysis with Rekognition’s base model or an adapter you have already created. 

**Run bulk analysis on your images**

To train an adapter on predictions you have verified, you must first start a Bulk analysis job to analyze a batch of images using Rekognition’s base model or an adapter of your choice. To run a Bulk analysis job: 

1. Sign in to the AWS Management Console and open the Amazon Rekognition console at [https://console.aws.amazon.com/rekognition/](https://console.aws.amazon.com/rekognition/).

1. In the left pane, choose **Bulk analysis**. The Bulk analysis landing page appears. Choose **Start Bulk Analysis**.  
![\[Bulk Analysis feature overview showing the workflow and listing recent Bulk Analysis jobs for Content Moderation using the base model.\]](http://docs.aws.amazon.com/rekognition/latest/dg/images/BA-1-create-bulk-analysis.png)

1. If this is your first time creating an adapter you will be prompted to create an Amazon Simple Storage Service bucket to store files related to your project and your adapter. Choose **Create Amazon S3 bucket**.

1. Select the adapter you want to use for the bulk analysis by using the **Choose an adapter** drop-down menu. If no adapter is chosen the base model will be used by default. For the purposes of this tutorial do not choose an adapter.  
![\[Bulk Analysis interface with dropdown menus to choose a Rekognition feature, adapter, set a job name and minimum confidence threshold for labels. Some fields are required.\]](http://docs.aws.amazon.com/rekognition/latest/dg/images/BA-2-bulk-analysis-job.png)

1.  In the **Bulk analysis job name** field, fill in the bulk analysis job name. 

1. Choose a value for the **Minimum confidence threshold**. Label predictions with confidence lower than your chosen threshold won’t be returned. Note that when you evaluate the model’s performance later on, you won’t be able to adjust the confidence threshold below this minimum.

1. In this step, you’ll also provide the images you want to analyze with Bulk analysis. These images may also be used to train your adapter. You can choose **Upload images from your computer** or **Import images from Amazon S3 bucket**. If you choose to import your images from an Amazon S3 bucket, provide the path to the bucket and folder that contains your training images. If you upload your images directly from your computer, note that you can only upload 50 images at one time.

1. After filling in this information, choose **Start analysis**. This will start the analysis process using Rekognition’s base model.

1. You can check the status of your Bulk analysis job on the main Bulk Analysis page. When the Bulk Analysis status becomes “Succeeded”, the results of the analysis are ready for review.  
![\[Bulk Analysis jobs table showing a job named "Evaluation 01" with status "Succeeded", using Content moderation Recognition API and Base model.\]](http://docs.aws.amazon.com/rekognition/latest/dg/images/BA-3-bulk-analysis-status.png)

1.  Choose the analysis you created from the list of **Bulk Analysis jobs**. 

1. On the Bulk Analysis details page you can see the predictions that Rekognition’s base model has made for the images you uploaded. 

1. Review the base model’s performance. You can change the confidence threshold that your adapter must have to assign a label to an image by using the Confidence threshold slider. The number of Flagged and Unflagged instances will change as you adjust the confidence threshold. The Label Categories panel displays the top-level categories that Rekognition recognizes, and you can select a category in this list to display any images that have been assigned that label.   
![\[The Bulk Analysis bar chart showing count of images flagged for various labels.\]](http://docs.aws.amazon.com/rekognition/latest/dg/images/BA-4-bulk-analysis-complete.png)
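The confidence threshold behaves like a simple filter over the returned label predictions, as in this sketch (label names and scores are illustrative):

```python
def flagged_labels(predictions, threshold):
    """Keep only label predictions at or above the confidence threshold."""
    return [p for p in predictions if p["Confidence"] >= threshold]

preds = [
    {"Name": "Alcohol", "Confidence": 92.0},
    {"Name": "Tobacco", "Confidence": 55.4},
]
# Raising the threshold reduces the number of flagged instances,
# which is what you see when you move the slider.
```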

**Verify predictions**

If you have reviewed the accuracy of Rekognition’s base model or a chosen adapter, and want to improve this accuracy, you can use the verification workflow: 

1. After you finish reviewing the base model’s performance, verify the predictions. Correcting the predictions allows you to train an adapter. Choose **Verify predictions** at the top of the Bulk analysis page.  
![\[A panel prompting you to verify predictions to calculate false positive and negative rates, or train a custom moderation adapter for enhanced accuracy.\]](http://docs.aws.amazon.com/rekognition/latest/dg/images/BA-6-start-verification.png)

1. On the Verify predictions page, you can see all the images that you provided to Rekognition’s base model, or a chosen adapter, along with the predicted label for each image. You must verify each prediction as correct or incorrect using the buttons under the image. Use the “X” button to mark a prediction as incorrect and the check-mark button to mark a prediction as correct. To train an adapter, you need to verify at least 20 false-positive predictions and 50 false-negative predictions for a given label. The more predictions you verify, the better the adapter’s performance will be.   
![\[Three images depicting people holding alcoholic beverages, used to illustrate the "Alcohol" category prediction for image labels.\]](http://docs.aws.amazon.com/rekognition/latest/dg/images/BA-7-verify-predictions-1.png)

   After you verify a prediction, the text below the image will change to show you the type of prediction you have verified. Once you have verified an image you can also add additional labels to the image using the **Assign labels to image** menu. You can see which images are flagged or unflagged by the model for your chosen confidence threshold or filter images by category.   
![\[Image showing three examples of content moderation for alcohol, as well as a menu to apply labels.\]](http://docs.aws.amazon.com/rekognition/latest/dg/images/BA-8-verify-predictions-2.png)

1. Once you have finished verifying all the predictions you want to verify, you can see statistics regarding your verified predictions in the **Per label performance** section of the Verification page. You can also return to the Bulk analysis details page to view these statistics.  
![\[Content Moderation verification page showing false positive rates for Explicit Nudity, Suggestive, and Alcohol labels at 50% confidence threshold.\]](http://docs.aws.amazon.com/rekognition/latest/dg/images/BA-8.5-predictions-stats.png)

1. When you are satisfied with the statistics in the **Per label performance** section, go to the **Verify predictions** page again, then select the **Train an adapter** button to begin training your adapter.  
![\[Verify predictions page showing job details including name, creation date, model version, input and output locations. Train an adapter button is present.\]](http://docs.aws.amazon.com/rekognition/latest/dg/images/BA-9-train-adapter.png)

1. On the Train an adapter page, you are prompted to create a project or choose an existing project. Name the project and the adapter that the project will contain. You must also specify the source of your test images. When specifying the images, you can choose Autosplit to have Rekognition automatically use a portion of your training data as test images, or you can manually specify a manifest file. Choosing Autosplit is recommended.   
![\[Interface for creating a new adapter project with fields to enter project name, adapter name, adapter description, specify test data source, and either autosplit data or import a manifest file.\]](http://docs.aws.amazon.com/rekognition/latest/dg/images/BA-10-train-adapter-project.png)

1. Specify any tags you want, as well as an AWS KMS key if you don’t want to use the default AWS key. It is recommended to leave **Auto-update** enabled. 

1. Choose **Train adapter**.  
![\[Configuration settings for an adapter, including options for adding tags, data encryption, confidence threshold, and auto-update. The adapter can be trained from this interface.\]](http://docs.aws.amazon.com/rekognition/latest/dg/images/BA-11-train-adapter.png)

1. Once the status of your adapter on the Custom Moderation landing page has become "Training complete", you can review your adapter’s performance. See [Reviewing adapter performance](#using-adapters-tutorial-performance) for more information.

## Reviewing adapter performance
<a name="using-adapters-tutorial-performance"></a>

To review your adapter performance:

1. When using the console, you’ll be able to see the status of any adapters associated with a project under the Projects tab on the Custom Moderation landing page. Navigate to the Custom Moderation landing page.  
![\[Custom Moderation landing page showing a list of moderation projects with details like status, adapter ID, input data location, base model version, date created, and status messages. Projects can be created, deleted, or resumed.\]](http://docs.aws.amazon.com/rekognition/latest/dg/images/adapters-7-status-alt.png)

1. Select the adapter you want to review from this list. On the following Adapter details page, you can see a variety of metrics for the adapter.  
![\[Adapter performance metrics showing 25% false positive improvement and 24% false negative reduction for different label categories like Suggestive and Alcohol, with data on ground truth true positives, base model, and adapter false negatives.\]](http://docs.aws.amazon.com/rekognition/latest/dg/images/adapters-8.5-new-performance-review.png)

1. With the **Threshold** panel you can change the minimum confidence threshold that your adapter must have to assign a label to an image. The number of Flagged and Unflagged instances will change as you adjust the confidence threshold. You can also filter by label category to see metrics for the categories you have selected. Set your chosen threshold.

1. You can assess the performance of your adapter on your test data by examining the metrics in the Adapter Performance panel. These metrics are calculated by comparing the adapter's predictions to the "ground truth" annotations on the test set. 

The adapter performance panel shows the False Positive Improvement and False Negative Improvement rates for the adapter that you created. The Per Label Performance tab can be used to compare the adapter and base model performance on each label category. It shows counts of false positive and false negative predictions by both the base model and the adapter, stratified by label category. By reviewing these metrics you can determine where the adapter needs improvement. For more information on these metrics, see [Evaluating and improving your adapter](using-adapters-evaluating-improving.md). 

To improve the performance, you can collect more training images and then create a new adapter inside the project. Return to the Custom Moderation landing page and create a new adapter inside your project, providing more training images for the adapter to be trained on. This time choose the **Add to an existing project** option instead of **Create a new project**, and select the project in which you want to create the new adapter from the **Project name** dropdown menu. As before, annotate your images or provide a manifest file with annotations.

![\[Interface for creating a new content moderation adapter or adding to an existing project, with options to name the adapter and project.\]](http://docs.aws.amazon.com/rekognition/latest/dg/images/adapters-9-create-new-adapter.png)


## Using your adapter
<a name="using-adapters-tutorial-using-adapter"></a>

After you have created your adapter you can supply it to a supported Rekognition operation like [DetectModerationLabels](https://docs.aws.amazon.com/rekognition/latest/APIReference/API_DetectModerationLabels.html). To see code samples you can use to carry out inference with your adapter, select the “Use adapter” tab, where you can see code samples for both the AWS CLI and Python. You can also visit the respective section of the documentation for the operation you have created an adapter for to see more code samples, setup instructions, and a sample JSON. 

![\[Interface showing locations for test data, training data, and output data with corresponding S3 URL fields. Options to use an adapter, view training images and tags, and access adapter details including its ID and code samples for AWS CLI and Python to use the trained adapter.\]](http://docs.aws.amazon.com/rekognition/latest/dg/images/adapters-12-use-adapter.png)
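As a sketch of the API call behind this tab, the following builds the request for `DetectModerationLabels` with an adapter supplied through the `ProjectVersion` parameter; the bucket, image key, and adapter ARN are placeholder values:

```python
# Sketch: supplying a trained adapter to DetectModerationLabels. The bucket,
# key, and adapter ARN are hypothetical placeholders.

def build_detect_request(bucket, key, adapter_arn, min_confidence=50):
    """Return keyword arguments for rekognition.detect_moderation_labels()."""
    return {
        "Image": {"S3Object": {"Bucket": bucket, "Name": key}},
        "MinConfidence": min_confidence,
        # ProjectVersion identifies the adapter to use with the base model.
        "ProjectVersion": adapter_arn,
    }

# With boto3, you would then call something like:
#   client = boto3.client("rekognition")
#   response = client.detect_moderation_labels(
#       **build_detect_request("my-bucket", "photo.jpg", adapter_arn))
```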


## Deleting your adapter and project
<a name="using-adapters-tutorial-deleting-adapter"></a>

You can delete individual adapters, or delete your project. You must delete each adapter associated with your project before you can delete the project itself.

1. To delete an adapter associated with the project, choose the adapter and then choose **Delete**.

1. To delete a project, choose the project you want to delete and then choose **Delete**.

# Evaluating and improving your adapter
<a name="using-adapters-evaluating-improving"></a>

After every round of adapter training, you’ll want to review the performance metrics in the Rekognition Console tool to determine how close the adapter is to your desired level of performance. You can then further improve your adapter’s accuracy for your images by uploading a new batch of training images and training a new adapter inside your project. Once you have created an improved version of the adapter, you can use the console to delete any older versions of the adapter that you no longer need. 

You can also retrieve metrics using the [DescribeProjectVersions](https://docs.aws.amazon.com/rekognition/latest/APIReference/API_DescribeProjectVersions.html) API operation.
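As a sketch, assuming a response shaped like the `DescribeProjectVersions` API reference (the sample values in any test data are invented), you might collect each adapter's evaluation F1 score like this:

```python
# Sketch: mapping each project version (adapter) to its test F1 score from a
# DescribeProjectVersions-style response; returns None where no evaluation
# result is present yet.

def get_adapter_f1_scores(response):
    scores = {}
    for version in response.get("ProjectVersionDescriptions", []):
        evaluation = version.get("EvaluationResult") or {}
        scores[version["ProjectVersionArn"]] = evaluation.get("F1Score")
    return scores
```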

## Performance metrics
<a name="using-adapters-performance-metrics"></a>

Once you have finished the training process and created your adapter, it's important to evaluate how well the adapter analyzes your images.

Two metrics are provided in the Rekognition Console to assist you in analyzing your adapter's performance: false positive improvement and false negative improvement. 

You can view these metrics for any adapter by selecting the "Adapter performance" tab in the adapter portion of the console. The adapter performance panel shows the False Positive Improvement and False Negative Improvement rates for the adapter that you created. 

False positive improvement measures how much the adapter reduces false positives relative to the base model. If the false positive improvement value is 25%, the adapter produced 25% fewer false positives than the base model on the test dataset.

False negative improvement measures how much the adapter reduces false negatives relative to the base model. If the false negative improvement value is 25%, the adapter produced 25% fewer false negatives than the base model on the test dataset.

The Per Label Performance tab can be used to compare the adapter and base model performance on each label category. It shows counts of false positive and false negative predictions by both the base model and the adapter, stratified by label category. By reviewing these metrics you can determine where the adapter needs improvement.

For example, if the Base Model False Negative count for the Alcohol label category is 15 while the Adapter False Negative count is 15 or higher, you know that you should focus on adding more images containing the Alcohol label when creating a new adapter.
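The improvement figures can be read as relative reductions in error counts on the test set. The exact formula Rekognition uses is not stated here, so treat the following as an illustrative sketch:

```python
# Illustrative arithmetic only: an improvement rate read as the relative
# reduction in error count (false positives or false negatives) on the test set.

def improvement_rate(base_errors, adapter_errors):
    """Percentage reduction in errors from the base model to the adapter."""
    if base_errors == 0:
        return 0.0
    return 100.0 * (base_errors - adapter_errors) / base_errors

# Example: the base model makes 20 false negatives and the adapter makes 15,
# which reads as a 25% false negative improvement.
```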

When using the Rekognition API operations, the F1-Score metric is returned when calling the [DescribeProjectVersions](https://docs.aws.amazon.com/rekognition/latest/APIReference/API_DescribeProjectVersions.html) operation.

## Improving your model
<a name="using-adapters-improving-model"></a>

Adapter deployment is an iterative process, as you’ll likely need to train an adapter several times to reach your target level of accuracy. After you create and train your adapter, you’ll want to test and evaluate your adapter’s performance on various types of labels. 

If your adapter’s accuracy is lacking in any area, add new examples of those images to increase the adapter’s performance for those labels. Try to provide the adapter with additional, varied examples that reflect the cases where it struggles. Providing your adapter with representative, varied images enables it to handle diverse real-world examples.

After adding new images to your training set, retrain the adapter, then re-evaluate on your test set and labels. Repeat this process until the adapter reaches your desired level of performance. If you provide more representative images and annotations, false positive and false negative scores will gradually improve over successive training iterations.

# Manifest file formats
<a name="using-adapters-manifest-files"></a>

The following sections show samples of the manifest file formats for input, output, and evaluation files.

## Input manifest
<a name="using-adapters-manifest-files-input"></a>

A manifest file is a JSON Lines file, with each line containing a JSON object that holds information about a single image. 

Each entry in the Input Manifest must contain the `source-ref` field with a path to the image in the Amazon S3 bucket and, for Custom Moderation, the `content-moderation-groundtruth` field with ground truth annotations. All images in one dataset are expected to be in the same bucket. The structure is common to both training and testing manifest files.

The `CreateProjectVersion` operation for Custom Moderation uses the information provided in the Input Manifest to train an adapter. 

The following example is one line of a manifest file for a single image that contains a single unsafe class:

```
{
   "source-ref": "s3://foo/bar/1.jpg",
   "content-moderation-groundtruth": {
        "ModerationLabels": [
            { 
                "Name": "Rude Gesture"
            }
        ]
   }
}
```

The following example is one line of a manifest file for a single, unsafe image that contains multiple unsafe classes, specifically Nudity and Rude Gesture.

```
{
   "source-ref": "s3://foo/bar/1.jpg",
   "content-moderation-groundtruth": {
        "ModerationLabels": [
            { 
                "Name": "Rude Gesture"
            },
            {
                "Name": "Nudity"
            }
        ]
   }
}
```

The following example is one line of a manifest file for a single image that does not contain any unsafe classes:

```
{
   "source-ref": "s3://foo/bar/1.jpg",
   "content-moderation-groundtruth": {
        "ModerationLabels": []
   }
}
```

For the complete list of supported labels refer to [Moderating content](https://docs.aws.amazon.com/rekognition/latest/dg/moderation.html).
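A minimal sketch for producing such a manifest programmatically (the S3 paths and labels are placeholders; label names must come from the supported moderation label list):

```python
import json

# Sketch: building a JSON Lines training manifest, one line per image,
# in the structure shown in the examples above.

def manifest_line(s3_uri, labels):
    """Return one manifest line for an image and its ground truth labels."""
    return json.dumps({
        "source-ref": s3_uri,
        "content-moderation-groundtruth": {
            "ModerationLabels": [{"Name": name} for name in labels],
        },
    })

def write_manifest(path, annotated_images):
    """annotated_images: iterable of (s3_uri, [label, ...]) pairs."""
    with open(path, "w") as f:
        for uri, labels in annotated_images:
            f.write(manifest_line(uri, labels) + "\n")
```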



## Output manifest
<a name="using-adapters-manifest-files-output"></a>

On completion of a training job, an output manifest file is returned. The output manifest file is a JSON Lines file, with each line containing a JSON object that holds information for a single image. The Amazon S3 path to the output manifest can be obtained from the `DescribeProjectVersions` response:
+  `TrainingDataResult.Output.Assets[0].GroundTruthManifest.S3Object` for training dataset 
+  `TestingDataResult.Output.Assets[0].GroundTruthManifest.S3Object` for testing dataset 

The following information is returned for each entry in the Output Manifest:


| Key Name | Description | 
| --- | --- |
| source-ref | Reference to an image in Amazon S3 that was provided in the input manifest | 
| content-moderation-groundtruth | Ground truth annotations that were provided in the input manifest | 
| detect-moderation-labels | Adapter predictions, part of the testing dataset only | 
| detect-moderation-labels-base-model | Base model predictions, part of the testing dataset only | 

Adapter and base model predictions are returned at a confidence threshold of 5.0, in a format similar to the [DetectModerationLabels](https://docs.aws.amazon.com/rekognition/latest/APIReference/API_DetectModerationLabels.html) response.

The following example shows structure of the Adapter and Base model predictions:

```
{
   "ModerationLabels": [ 
      { 
         "Confidence": number,
         "Name": "string",
         "ParentName": "string"
      }
   ],
   "ModerationModelVersion": "string",
   "ProjectVersion": "string"
}
```

For the complete list of labels returned refer to [Moderating content](https://docs.aws.amazon.com/rekognition/latest/dg/moderation.html).
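As an illustrative sketch, you can tally false positives and false negatives for one output-manifest line by comparing the ground truth to the predictions at a chosen confidence threshold (field names follow the table above; the threshold and sample data are assumptions):

```python
import json

# Sketch: per-line error tally for the testing portion of an output manifest.
# Labels predicted above the threshold but not annotated count as false
# positives; annotated labels the model missed count as false negatives.

def line_errors(output_line, threshold=50.0, pred_key="detect-moderation-labels"):
    entry = json.loads(output_line)
    truth = {l["Name"] for l in entry["content-moderation-groundtruth"]["ModerationLabels"]}
    preds = {
        l["Name"]
        for l in entry[pred_key]["ModerationLabels"]
        if l["Confidence"] >= threshold
    }
    return preds - truth, truth - preds  # (false positives, false negatives)
```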

## Evaluation results manifest
<a name="using-adapters-manifest-files-eval"></a>

On completion of a training job, an evaluation result manifest file is returned. The evaluation results manifest is a JSON file output by the training job, and it contains information on how well the adapter performed on the test data.

The Amazon S3 path to the evaluation results manifest can be obtained from the `EvaluationResult.Summary.S3Object` field in the `DescribeProjectVersions` response.

The following example shows the structure of the evaluation results manifest:

```
{
    "AggregatedEvaluationResults": {
       "F1Score": number
    },

    "EvaluationDetails": {
        "EvaluationEndTimestamp": "datetime",
        "Labels": [
            "string"
        ],
        "NumberOfTestingImages": number,
        "NumberOfTrainingImages": number,
        "ProjectVersionArn": "string"
    },

    "ContentModeration": {
        "InputConfidenceThresholdEvalResults": {
            "ConfidenceThreshold": float,
            "AggregatedEvaluationResults": {
                "BaseModel": {
                    "TruePositive": int,
                    "TrueNegative": int,
                    "FalsePositive": int,
                    "FalseNegative": int
                },
                "Adapter": {
                    "TruePositive": int,
                    "TrueNegative": int,
                    "FalsePositive": int,
                    "FalseNegative": int
                }
            },
            "LabelEvaluationResults": [
                {
                    "Label": "string",
                    "BaseModel": {
                        "TruePositive": int,
                        "TrueNegative": int,
                        "FalsePositive": int,
                        "FalseNegative": int
                    },
                    "Adapter": {
                        "TruePositive": int,
                        "TrueNegative": int,
                        "FalsePositive": int,
                        "FalseNegative": int
                    }
                }
            ]
        },
        "AllConfidenceThresholdsEvalResults": [
            {
                "ConfidenceThreshold": float,
                "AggregatedEvaluationResults": {
                    "BaseModel": {
                        "TruePositive": int,
                        "TrueNegative": int,
                        "FalsePositive": int,
                        "FalseNegative": int
                    },
                    "Adapter": {
                        "TruePositive": int,
                        "TrueNegative": int,
                        "FalsePositive": int,
                        "FalseNegative": int
                    }
                },
                "LabelEvaluationResults": [
                    {
                       "Label": "string",
                        "BaseModel": {
                            "TruePositive": int,
                            "TrueNegative": int,
                            "FalsePositive": int,
                            "FalseNegative": int
                        },
                        "Adapter": {
                            "TruePositive": int,
                            "TrueNegative": int,
                            "FalsePositive": int,
                            "FalseNegative": int
                        }
                    }
                ]
            }
        ]
    }
}
```

The evaluation manifest file contains:
+ Aggregated results as defined by `F1Score` 
+ Details for the evaluation job including the ProjectVersionArn, number of training images, number of testing images, and the labels the adapter was trained on.
+ Aggregated TruePositive, TrueNegative, FalsePositive, and FalseNegative results for both base model and adapter performance.
+ Per label TruePositive, TrueNegative, FalsePositive, and FalseNegative results for both base model and adapter performance, calculated at the input confidence threshold.
+ Aggregated and per label TruePositive, TrueNegative, FalsePositive, and FalseNegative results for both base model and adapter performance at different confidence thresholds. The confidence threshold ranges from 5 to 100 in steps of 5.
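As a sketch of how you might use the per-threshold results, the following scans `AllConfidenceThresholdsEvalResults` for the threshold where the adapter makes the fewest total errors; minimizing false positives plus false negatives is just one possible selection criterion:

```python
# Sketch: choosing a confidence threshold from the evaluation results manifest.
# eval_results is the ContentModeration["AllConfidenceThresholdsEvalResults"]
# list from the structure shown above.

def best_threshold(eval_results):
    def adapter_errors(result):
        adapter = result["AggregatedEvaluationResults"]["Adapter"]
        return adapter["FalsePositive"] + adapter["FalseNegative"]
    return min(eval_results, key=adapter_errors)["ConfidenceThreshold"]
```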

# Best practices for training adapters
<a name="using-adapters-best-practices"></a>

It's suggested that you abide by the following best practices when creating, training, and using your adapters:



1.  The sample image data should capture the representative errors that you intend to suppress. If the model makes repeated mistakes on visually similar images, make sure to include many of those images for training. 

1.  Instead of bringing in only images that the model makes mistakes on for a particular Moderation label, also bring in images that the model does not make mistakes on for that Moderation label. 

1.  Supply a minimum of 50 False Negative samples OR 20 False Positive samples for training and a minimum of 20 samples for testing. However, supply as many annotated images as possible for better adapter performance. 

1.  Annotate all labels that matter to you across all images - if you decide that you need to annotate the occurrence of a label on one image, make sure to annotate the occurrence of that label on all other images. 

1.  The sample image data should contain as many variations on the label as possible, focusing on instances that are representative of the images that will be analyzed in a production setting. 

# Setting up AutoUpdate permissions
<a name="using-adapters-autoupdate"></a>

Rekognition supports the AutoUpdate feature for custom adapters. This means automated retraining is attempted on a best-effort basis when the AutoUpdate flag is ENABLED on a project. These automatic updates require permission to access your training/testing datasets and the AWS KMS key that you train your custom adapter with. You can provide these permissions by following the steps below.



## Amazon S3 Bucket Permissions
<a name="using-adapters-autoupdate-s3"></a>

 By default, all Amazon S3 buckets and objects are private. Only the resource owner, the AWS account that created the bucket, can access the bucket and any objects that it contains. However, the resource owner can choose to grant access permissions to other resources and users by writing a bucket policy.

 If you want to create or modify an Amazon S3 bucket to be used as a source of input datasets and a destination for training results in a custom adapter training, you must further modify the bucket policy. To read from or write to an Amazon S3 bucket, Rekognition must have the following permissions. 

**Rekognition Required Amazon S3 Policy**

Rekognition requires a permission policy with the following attributes:
+ The statement SID
+ The bucket name
+ The service principal name for Rekognition.
+ The resources that Rekognition requires: the bucket and all of its contents
+ The required actions that Rekognition needs to take.

The following policy allows Rekognition to access an Amazon S3 bucket during automated retraining.

```
{
    "Statement": [
        {
            "Effect": "Allow",
            "Sid": "AllowRekognitionAutoUpdateActions",
            "Principal": {
                "Service": "rekognition.amazonaws.com"
            },
            "Action": [
                "s3:ListBucket",
                "s3:GetObject",
                "s3:PutObject",
                "s3:HeadObject",
                "s3:HeadBucket"
            ],
            "Resource": [
                "arn:aws:s3:::amzn-s3-demo-bucket",
                "arn:aws:s3:::amzn-s3-demo-bucket/*"
            ]
        }
    ]
}
```

You can follow [this guide](https://docs.aws.amazon.com/AmazonS3/latest/userguide/add-bucket-policy.html) to add the above bucket policy to your S3 bucket.

See more information on bucket policies [here](https://docs.aws.amazon.com/AmazonS3/latest/userguide/bucket-policies.html).

## AWS KMS Key Permissions
<a name="using-adapters-autoupdate-KMS"></a>

 Rekognition allows you to provide an optional KmsKeyId while training a custom adapter. When provided, Rekognition uses this key to encrypt training and test images copied into the service for model training. The key is also used to encrypt training results and manifest files written to the output Amazon S3 bucket (OutputConfig). 

 If you choose to provide a KMS key as input to your custom adapter training (that is, `Rekognition:CreateProjectVersion`), you must further modify the KMS key policy to allow the Rekognition service principal to use this key for automated retraining in the future. Rekognition must have the following permissions. 

**Rekognition Required AWS KMS Key Policy**

Amazon Rekognition requires a permission policy with the following attributes:
+ The statement SID
+ The service principal name for Amazon Rekognition.
+ The required actions that Amazon Rekognition needs to take.

The following key policy allows Amazon Rekognition to access an Amazon KMS key during automated retraining:


```
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "KeyPermissions",
            "Effect": "Allow",
            "Principal": {
                "Service": "rekognition.amazonaws.com"
            },
            "Action": [
                "kms:DescribeKey",
                "kms:GenerateDataKey",
                "kms:Decrypt"
            ],
            "Resource": "*"
        }
    ]
}
```


You can follow [this guide](https://docs.aws.amazon.com/kms/latest/APIReference/API_PutKeyPolicy.html) to add the above AWS KMS policy to your AWS KMS key.

See more information on AWS KMS policies [here](https://docs.aws.amazon.com/kms/latest/developerguide/key-policies.html).

# AWS Health Dashboard notifications for Rekognition
<a name="using-adapters-health-notification"></a>

 Your AWS Health Dashboard provides support for notifications that come from Rekognition. These notifications provide awareness and remediation guidance on scheduled changes in Rekognition Models that may affect your applications. Only events that are specific to the Rekognition Content Moderation feature are currently available. 

The AWS Health Dashboard is part of the AWS Health service. It requires no setup and can be viewed by any user that is authenticated in your account. For more information, see [Getting started with the AWS Health Dashboard](https://docs.aws.amazon.com/health/latest/ug/getting-started-phd.html).

If you receive a notification message similar to the following, treat it as an alarm to take action.

**Example notification: A new model version is available for Rekognition Content Moderation.**

Rekognition publishes the `AWS_MODERATION_MODEL_VERSION_UPDATE_NOTIFICATION` event to the AWS Health Dashboard to indicate that a new version of the moderation model has been released. This event is important if you are using the DetectModerationLabels API and adapters with this API. New models can impact quality depending on your use case, and will eventually replace previous model versions. It is recommended to validate your model quality and be aware of model update timelines when you get this alert. 

If you receive a model version update notification, you should treat it as an alarm to take action. If you don't use adapters, you should evaluate the quality of the updated model on your existing use case. If you use adapters, you should train new adapters with the updated model and evaluate their quality. If you have auto-train set, new adapters will be trained automatically, and then you can evaluate their quality.

```
{
   "version": "0",
    "id": "id-number",
    "detail-type": "AWS Health Event",
    "source": "aws.health",
    "account": "123456789012",
    "time": "2023-10-06T06:27:57Z",
    "region": "region",
    "resources": [],
    "detail": {
        "eventArn": "arn:aws:health:us-east-1::event/AWS_MODERATION_MODEL_UPDATE_NOTIFICATION_event-number",
        "service": "Rekognition",
        "eventTypeCode": "AWS_MODERATION_MODEL_VERSION_UPDATE_NOTIFICATION",
        "eventScopeCode": "ACCOUNT_SPECIFIC",
        "communicationId": "communication-id-number",
        "eventTypeCategory": "scheduledChange",
        "startTime": "Fri, 05 Apr 2023 12:00:00 GMT",
        "lastUpdatedTime": "Fri, 05 Apr 2023 12:00:00 GMT",
        "statusCode": "open",
        "eventRegion": "us-east-1",
        "eventDescription": [
            {
                "language": "en_US",
                "latestDescription": "A new model version is available for Rekognition Content Moderation."
            }
        ]
    }
}
```

 See [Monitoring AWS Health events with Amazon EventBridge](https://docs.aws.amazon.com/health/latest/ug/cloudwatch-events-health.html) to detect and react to AWS Health events using EventBridge. 
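If you route these events through EventBridge to your own code, a minimal sketch of filtering for this event type (based on the event shape shown above) might look like:

```python
# Sketch: recognizing the moderation model update event in an EventBridge
# target, using the fields from the example event above.

def is_moderation_model_update(event):
    detail = event.get("detail", {})
    return (
        event.get("source") == "aws.health"
        and detail.get("service") == "Rekognition"
        and detail.get("eventTypeCode")
            == "AWS_MODERATION_MODEL_VERSION_UPDATE_NOTIFICATION"
    )
```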

# Reviewing inappropriate content with Amazon Augmented AI
<a name="a2i-rekognition"></a>

Amazon Augmented AI (Amazon A2I) enables you to build the workflows that are required for human review of machine learning predictions.

Amazon Rekognition is directly integrated with Amazon A2I so that you can easily implement human review for the use case of detecting unsafe images. Amazon A2I provides a human review workflow for image moderation. This enables you to easily review predictions from Amazon Rekognition. You can define confidence thresholds for your use case and adjust them over time. With Amazon A2I, you can use a pool of reviewers within your own organization or Amazon Mechanical Turk. You can also use workforce vendors that are prescreened by AWS for quality and adherence to security procedures.

The following steps walk you through how to set up Amazon A2I with Amazon Rekognition. First, you create a flow definition with Amazon A2I that has the conditions that trigger human review. Then, you pass the flow definition's Amazon Resource Name (ARN) to the Amazon Rekognition `DetectModerationLabels` operation. In the `DetectModerationLabels` response, you can see if human review is required. The results of human review are available in an Amazon S3 bucket that is set by the flow definition.
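A minimal sketch of that request, with a placeholder flow definition ARN and human loop name, might look like:

```python
# Sketch: a DetectModerationLabels request that attaches a human review
# workflow. The bucket, key, flow definition ARN, and loop name are
# hypothetical placeholders.

def build_review_request(bucket, key, flow_definition_arn, loop_name):
    return {
        "Image": {"S3Object": {"Bucket": bucket, "Name": key}},
        "HumanLoopConfig": {
            "HumanLoopName": loop_name,
            "FlowDefinitionArn": flow_definition_arn,
        },
    }

# When the flow definition's conditions are met, the response includes a
# HumanLoopActivationOutput field indicating that a human loop was started.
```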

To view an end-to-end demonstration of how to use Amazon A2I with Amazon Rekognition, see one of the following tutorials in the *Amazon SageMaker AI Developer Guide*.
+ [Demo: Get Started in the Amazon A2I Console](https://docs.aws.amazon.com/sagemaker/latest/dg/a2i-get-started-console.html)
+ [Demo: Get Started Using the Amazon A2I API](https://docs.aws.amazon.com/sagemaker/latest/dg/a2i-get-started-api.html)

  To get started using the API, you can also run an example Jupyter notebook. See [Use a SageMaker Notebook Instance with Amazon A2I Jupyter Notebook](https://docs.aws.amazon.com/sagemaker/latest/dg/a2i-task-types-general.html#a2i-task-types-notebook-demo) to use the notebook [Amazon Augmented AI (Amazon A2I) integration with Amazon Rekognition [Example]](https://github.com/aws-samples/amazon-a2i-sample-jupyter-notebooks/blob/master/Amazon%20Augmented%20AI%20(A2I)%20and%20Rekognition%20DetectModerationLabels.ipynb) in a SageMaker AI notebook instance.

**Running DetectModerationLabels with Amazon A2I**
**Note**  
Create all of your Amazon A2I resources and Amazon Rekognition resources in the same AWS Region.

1. Complete the prerequisites that are listed in [Getting Started with Amazon Augmented AI](https://docs.aws.amazon.com/sagemaker/latest/dg/a2i-getting-started.html) in the *SageMaker AI Documentation*.

   Additionally, make sure that you set up your IAM permissions as described in [Permissions and Security in Amazon Augmented AI](https://docs.aws.amazon.com/sagemaker/latest/dg/a2i-permissions-security.html) in the *SageMaker AI Documentation*.

1. Follow the instructions for [Creating a Human Review Workflow](https://docs.aws.amazon.com/sagemaker/latest/dg/create-human-review-console.html) in the *SageMaker AI Documentation*.

   A human review workflow manages the processing of an image. It holds the conditions that trigger a human review, the work team that the image is sent to, the UI template that the work team uses, and the Amazon S3 bucket that the work team's results are sent to.

   Within your `CreateFlowDefinition` call, you need to set the `HumanLoopRequestSource` to "AWS/Rekognition/DetectModerationLabels/Image/V3". After that, you need to decide how you want to set up your conditions that trigger human review.

   With Amazon Rekognition you have two options for `ConditionType`: `ModerationLabelConfidenceCheck` and `Sampling`.

   `ModerationLabelConfidenceCheck` creates a human loop when the confidence of a moderation label falls within a range that you specify. `Sampling` sends a random percentage of the processed documents for human review. Each `ConditionType` uses a different set of `ConditionParameters` to determine which results are sent for human review.

   `ModerationLabelConfidenceCheck` has the `ConditionParameters` value `ModerationLabelName`, which sets the label that needs to be reviewed by humans. Additionally, it has confidence parameters, which set the confidence range for sending to human review with `ConfidenceLessThan`, `ConfidenceGreaterThan`, and `ConfidenceEquals`. `Sampling` has `RandomSamplingPercentage`, which sets the percentage of documents that are sent to human review.
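   For instance, a `Sampling` condition can be expressed as the following JSON fragment. This is a minimal sketch; the 5 percent value is an arbitrary example:

   ```python
   import json

   # Sketch of a Sampling condition: routes a random 5% of processed
   # images to human review, regardless of label confidence.
   sampling_conditions = json.dumps(
       {
           "Conditions": [
               {
                   "ConditionType": "Sampling",
                   "ConditionParameters": {
                       "RandomSamplingPercentage": 5
                   }
               }
           ]
       }
   )
   print(sampling_conditions)
   ```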

   The following code example is a partial call of `CreateFlowDefinition`. It sends an image for human review if it's rated less than 98% on the label "Suggestive", and more than 95% on the label "Female Swimwear or Underwear". This means that if the image isn't considered suggestive but does have a woman in underwear or swimwear, you can double check the image by using human review.

   ```
   import json

   def create_flow_definition():
       '''
       Creates a Flow Definition resource

       Returns:
       struct: FlowDefinitionArn
       '''
       humanLoopActivationConditions = json.dumps(
           {
               "Conditions": [
                   {
                       "And": [
                           {
                               "ConditionType": "ModerationLabelConfidenceCheck",
                               "ConditionParameters": {
                                   "ModerationLabelName": "Suggestive",
                                   "ConfidenceLessThan": 98
                               }
                           },
                           {
                               "ConditionType": "ModerationLabelConfidenceCheck",
                               "ConditionParameters": {
                                   "ModerationLabelName": "Female Swimwear Or Underwear",
                                   "ConfidenceGreaterThan": 95
                               }
                           }
                       ]
                   }
               ]
           }
       )
   ```

   `CreateFlowDefinition` returns a `FlowDefinitionArn`, which you use in the next step when you call `DetectModerationLabels`.

   For more information, see [CreateFlowDefinition](https://docs.aws.amazon.com/sagemaker/latest/dg/API_CreateFlowDefinition.html) in the *SageMaker AI API Reference*.
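   The partial example earlier in this step builds only the activation conditions. The following sketch shows the overall shape of a `create_flow_definition` request that uses such conditions; all ARNs, names, and the S3 output path are hypothetical placeholders that you replace with your own resources:

   ```python
   import json

   # Hypothetical activation conditions (a simple Sampling rule here,
   # but the ModerationLabelConfidenceCheck JSON above works the same way).
   human_loop_activation_conditions = json.dumps(
       {"Conditions": [{"ConditionType": "Sampling",
                        "ConditionParameters": {"RandomSamplingPercentage": 5}}]}
   )

   # All ARNs, names, and paths below are hypothetical placeholders.
   request = {
       "FlowDefinitionName": "moderation-review-flow",
       "RoleArn": "arn:aws:iam::111122223333:role/a2i-execution-role",
       "HumanLoopRequestSource": {
           "AwsManagedHumanLoopRequestSource": "AWS/Rekognition/DetectModerationLabels/Image/V3"
       },
       "HumanLoopActivationConfig": {
           "HumanLoopActivationConditionsConfig": {
               "HumanLoopActivationConditions": human_loop_activation_conditions
           }
       },
       "HumanLoopConfig": {
           "WorkteamArn": "arn:aws:sagemaker:us-east-1:111122223333:workteam/private-crowd/my-team",
           "HumanTaskUiArn": "arn:aws:sagemaker:us-east-1:111122223333:human-task-ui/my-ui",
           "TaskCount": 1,
           "TaskTitle": "Review moderation labels",
           "TaskDescription": "Review images flagged by Amazon Rekognition"
       },
       "OutputConfig": {"S3OutputPath": "s3://amzn-s3-demo-bucket/a2i-results/"}
   }

   # With real resources in place, you would then call:
   # sagemaker = boto3.client("sagemaker", region_name="us-east-1")
   # flow_definition_arn = sagemaker.create_flow_definition(**request)["FlowDefinitionArn"]
   ```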

1. Set the `HumanLoopConfig` parameter when you call `DetectModerationLabels`, as in [Detecting inappropriate images](procedure-moderate-images.md). See step 4 for examples of a `DetectModerationLabels` call with `HumanLoopConfig` set.

   1. Within the `HumanLoopConfig` parameter, set the `FlowDefinitionArn` to the ARN of the flow definition that you created in step 2.

   1. Set your `HumanLoopName`. This should be unique within a Region and must be lowercase.

   1. (Optional) You can use `DataAttributes` to set whether or not the image you passed to Amazon Rekognition is free of personally identifiable information. You must set this parameter in order to send information to Amazon Mechanical Turk.

1. Run `DetectModerationLabels`.

   The following examples show how to use the AWS CLI and AWS SDK for Python (Boto3) to run `DetectModerationLabels` with `HumanLoopConfig` set.

------
#### [ AWS SDK for Python (Boto3) ]

   The following request example uses the SDK for Python (Boto3). For more information, see [detect\_moderation\_labels](https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/rekognition.html#Rekognition.Client.detect_moderation_labels) in the *AWS SDK for Python (Boto3) API Reference*.

   ```
   import boto3

   rekognition = boto3.client("rekognition", region_name="aws-region")

   response = rekognition.detect_moderation_labels(
           Image={'S3Object': {'Bucket': bucket_name, 'Name': image_name}},
           HumanLoopConfig={
               'HumanLoopName': 'human_loop_name',
               'FlowDefinitionArn': "arn:aws:sagemaker:aws-region:aws_account_number:flow-definition/flow_def_name",
               'DataAttributes': {'ContentClassifiers': ['FreeOfPersonallyIdentifiableInformation','FreeOfAdultContent']}
            })
   ```

------
#### [ AWS CLI ]

   The following request example uses the AWS CLI. For more information, see [detect-moderation-labels](https://docs.aws.amazon.com/cli/latest/reference/rekognition/detect-moderation-labels.html) in the *[AWS CLI Command Reference](https://docs.aws.amazon.com/cli/latest/reference/)*.

   ```
   $ aws rekognition detect-moderation-labels \
       --image "S3Object={Bucket='bucket_name',Name='image_name'}" \
       --human-loop-config HumanLoopName="human_loop_name",FlowDefinitionArn="arn:aws:sagemaker:aws-region:aws_account_number:flow-definition/flow_def_name",DataAttributes='{ContentClassifiers=["FreeOfPersonallyIdentifiableInformation", "FreeOfAdultContent"]}'
   ```

   ```
   $ aws rekognition detect-moderation-labels \
       --image "S3Object={Bucket='bucket_name',Name='image_name'}" \
       --human-loop-config \
           '{"HumanLoopName": "human_loop_name", "FlowDefinitionArn": "arn:aws:sagemaker:aws-region:aws_account_number:flow-definition/flow_def_name", "DataAttributes": {"ContentClassifiers": ["FreeOfPersonallyIdentifiableInformation", "FreeOfAdultContent"]}}'
   ```

------

   When you run `DetectModerationLabels` with `HumanLoopConfig` enabled, Amazon Rekognition calls the SageMaker AI API operation `StartHumanLoop`. This operation takes the response from `DetectModerationLabels` and checks it against the flow definition's conditions. If the response meets the conditions for review, a `HumanLoopArn` is returned, and the members of the work team that you set in your flow definition can now review the image. Calling the Amazon Augmented AI runtime operation `DescribeHumanLoop` provides information about the outcome of the loop. For more information, see [DescribeHumanLoop](https://docs.aws.amazon.com/augmented-ai/2019-11-07/APIReference/API_DescribeHumanLoop.html) in the *Amazon Augmented AI API Reference*.
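   To check programmatically whether a review was triggered, you can inspect the `HumanLoopActivationOutput` field of the `DetectModerationLabels` response. The sketch below uses a hard-coded sample response for illustration; the commented-out lines show how you might then poll the loop's status (the loop name is a hypothetical placeholder):

   ```python
   # Illustrative sample of a DetectModerationLabels response in which
   # a human loop was started. Values are hypothetical.
   sample_response = {
       "ModerationLabels": [
           {"Name": "Female Swimwear Or Underwear", "Confidence": 96.1,
            "ParentName": "Suggestive"}
       ],
       "HumanLoopActivationOutput": {
           "HumanLoopArn": "arn:aws:sagemaker:us-east-1:111122223333:human-loop/human_loop_name",
           "HumanLoopActivationReasons": ["ConditionsEvaluation"]
       }
   }

   def human_loop_arn(response):
       """Return the human loop ARN if review was triggered, else None."""
       return response.get("HumanLoopActivationOutput", {}).get("HumanLoopArn")

   arn = human_loop_arn(sample_response)

   # If a loop was started, you could then poll its status:
   # a2i = boto3.client("sagemaker-a2i-runtime", region_name="us-east-1")
   # status = a2i.describe_human_loop(HumanLoopName="human_loop_name")["HumanLoopStatus"]
   ```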

   After the image has been reviewed, you can see the results in the bucket that is specified in your flow definition's output path. Amazon A2I will also notify you with Amazon CloudWatch Events when the review is complete. To see what events to look for, see [CloudWatch Events](https://docs.aws.amazon.com/sagemaker/latest/dg/augmented-ai-cloudwatch-events.html) in the *SageMaker AI Documentation*.

   For more information, see [Getting Started with Amazon Augmented AI](https://docs.aws.amazon.com/sagemaker/latest/dg/a2i-getting-started.html) in the *SageMaker AI Documentation*.