

# Transforming a COCO dataset into a manifest file format


[COCO](http://cocodataset.org/#home) is a format for specifying large-scale object detection, segmentation, and captioning datasets. This Python [example](md-coco-transform-example.md) shows you how to transform a COCO object detection format dataset into an Amazon Rekognition Custom Labels [bounding box format manifest file](md-create-manifest-file-object-detection.md). This section also includes information that you can use to write your own code.

A COCO format JSON file consists of five sections providing information for *an entire dataset*. For more information, see [The COCO dataset format](md-coco-overview.md). 
+ `info` – general information about the dataset. 
+ `licenses` – license information for the images in the dataset.
+ [`images`](md-coco-overview.md#md-coco-images) – a list of images in the dataset.
+ [`annotations`](md-coco-overview.md#md-coco-annotations) – a list of annotations (including bounding boxes) for the images in the dataset.
+ [`categories`](md-coco-overview.md#md-coco-categories) – a list of label categories.

You need information from the `images`, `annotations`, and `categories` lists to create an Amazon Rekognition Custom Labels manifest file.

An Amazon Rekognition Custom Labels manifest file is in JSON lines format where each line has the bounding box and label information for one or more objects *on an image*. For more information, see [Object localization in manifest files](md-create-manifest-file-object-detection.md).
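Because each line is an independent JSON document, a manifest can be processed line by line. The following is a minimal sketch using in-memory lines rather than a real file, with hypothetical bucket and image names:

```python
import json

# Each line of a manifest file is a self-contained JSON document
# describing one image. Parse a small in-memory example.
manifest_lines = [
    '{"source-ref": "s3://bucket/images/a.jpg", "bounding-box": {"annotations": []}}',
    '{"source-ref": "s3://bucket/images/b.jpg", "bounding-box": {"annotations": []}}',
]

entries = [json.loads(line) for line in manifest_lines]
for entry in entries:
    print(entry["source-ref"])
```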

## Mapping COCO objects to a Custom Labels JSON line


To transform a COCO format dataset, you map the COCO dataset to an Amazon Rekognition Custom Labels manifest file for object localization. For more information, see [Object localization in manifest files](md-create-manifest-file-object-detection.md). To build a JSON line for each image, you match the COCO dataset `image`, `annotation`, and `category` objects by using their ID fields. 

The following is an example COCO manifest file. For more information, see [The COCO dataset format](md-coco-overview.md).

```
{
    "info": {
        "description": "COCO 2017 Dataset","url": "http://cocodataset.org","version": "1.0","year": 2017,"contributor": "COCO Consortium","date_created": "2017/09/01"
    },
    "licenses": [
        {"url": "http://creativecommons.org/licenses/by/2.0/","id": 4,"name": "Attribution License"}
    ],
    "images": [
        {"id": 242287, "license": 4, "coco_url": "http://images.cocodataset.org/val2017/xxxxxxxxxxxx.jpg", "flickr_url": "http://farm3.staticflickr.com/2626/xxxxxxxxxxxx.jpg", "width": 426, "height": 640, "file_name": "xxxxxxxxx.jpg", "date_captured": "2013-11-15 02:41:42"},
        {"id": 245915, "license": 4, "coco_url": "http://images.cocodataset.org/val2017/nnnnnnnnnnnn.jpg", "flickr_url": "http://farm1.staticflickr.com/88/xxxxxxxxxxxx.jpg", "width": 640, "height": 480, "file_name": "nnnnnnnnnn.jpg", "date_captured": "2013-11-18 02:53:27"}
    ],
    "annotations": [
        {"id": 125686, "category_id": 0, "iscrowd": 0, "segmentation": [[164.81, 417.51,......167.55, 410.64]], "image_id": 242287, "area": 42061.80340000001, "bbox": [19.23, 383.18, 314.5, 244.46]},
        {"id": 1409619, "category_id": 0, "iscrowd": 0, "segmentation": [[376.81, 238.8,........382.74, 241.17]], "image_id": 245915, "area": 3556.2197000000015, "bbox": [399, 251, 155, 101]},
        {"id": 1410165, "category_id": 1, "iscrowd": 0, "segmentation": [[486.34, 239.01,..........495.95, 244.39]], "image_id": 245915, "area": 1775.8932499999994, "bbox": [86, 65, 220, 334]}
    ],
    "categories": [
        {"supercategory": "speaker","id": 0,"name": "echo"},
        {"supercategory": "speaker","id": 1,"name": "echo dot"}
    ]
}
```

The following diagram shows how the COCO dataset lists for a *dataset* map to Amazon Rekognition Custom Labels JSON lines for an *image*. Every JSON line for an image has a source-ref, job, and job metadata field. Matching colors indicate information for a single image. Note that in the manifest an individual image can have multiple annotations and multiple class-map entries.

![\[Diagram showing the structure of Coco Manifest, with images, annotations, and categories contained within it.\]](http://docs.aws.amazon.com/rekognition/latest/customlabels-dg/images/coco-transform.png)


**To get the COCO objects for a single JSON line**

1. For each image in the images list, get the annotations from the annotations list where the value of the annotation field `image_id` matches the image `id` field.

1. For each annotation matched in step 1, read through the `categories` list and get each `category` where the value of the `category` field `id` matches the `annotation` object `category_id` field.

1. Create a JSON line for the image using the matched `image`, `annotation`, and `category` objects. To map the fields, see [Mapping COCO object fields to Custom Labels JSON line object fields](#md-mapping-fields-coco). 

1. Repeat steps 1–3 until you have created JSON lines for each `image` object in the `images` list.
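The steps above can be sketched as follows. This is a minimal illustration using trimmed-down in-line lists drawn from the example dataset; the variable names and the shape of the output record are not part of the COCO format:

```python
# Trimmed-down versions of the COCO lists from the example above.
images = [{"id": 242287, "file_name": "xxxxxxxxx.jpg"},
          {"id": 245915, "file_name": "nnnnnnnnnn.jpg"}]
annotations = [{"image_id": 245915, "category_id": 0, "bbox": [399, 251, 155, 101]},
               {"image_id": 245915, "category_id": 1, "bbox": [86, 65, 220, 334]}]
categories = [{"id": 0, "name": "echo"}, {"id": 1, "name": "echo dot"}]

category_names = {c["id"]: c["name"] for c in categories}

json_lines = []
for image in images:
    # Step 1: match annotations to this image by image_id.
    matched = [a for a in annotations if a["image_id"] == image["id"]]
    # Step 2: look up the category name for each matched annotation.
    labels = [category_names[a["category_id"]] for a in matched]
    # Step 3: combine the matched objects into one record per image.
    json_lines.append({"file_name": image["file_name"],
                       "annotations": matched,
                       "labels": labels})
```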

For example code, see [Transforming a COCO dataset](md-coco-transform-example.md).

## Mapping COCO object fields to Custom Labels JSON line object fields


After you identify the COCO objects for an Amazon Rekognition Custom Labels JSON line, you need to map the COCO object fields to the respective Amazon Rekognition Custom Labels JSON line object fields. The following example Amazon Rekognition Custom Labels JSON line maps one image (`id`=`000000245915`) to the preceding COCO JSON example. Note the following information.
+ `source-ref` is the location of the image in an Amazon S3 bucket. If your COCO images aren't stored in an Amazon S3 bucket, you need to move them to an Amazon S3 bucket.
+ The `annotations` list contains an `annotation` object for each object on the image. An `annotation` object includes bounding box information (`top`, `left`, `width`, `height`) and a label identifier (`class_id`).
+ The label identifier (`class_id`) maps to an entry in the `class-map` list in the metadata, which lists the labels used on the image.

```
{
	"source-ref": "s3://custom-labels-bucket/images/000000245915.jpg",
	"bounding-box": {
		"image_size": {
			"width": 640,
			"height": 480,
			"depth": 3
		},
		"annotations": [{
			"class_id": 0,
			"top": 251,
			"left": 399,
			"width": 155,
			"height": 101
		}, {
			"class_id": 1,
			"top": 65,
			"left": 86,
			"width": 220,
			"height": 334
		}]
	},
	"bounding-box-metadata": {
		"objects": [{
			"confidence": 1
		}, {
			"confidence": 1
		}],
		"class-map": {
			"0": "Echo",
			"1": "Echo Dot"
		},
		"type": "groundtruth/object-detection",
		"human-annotated": "yes",
		"creation-date": "2018-10-18T22:18:13.527256",
		"job-name": "my job"
	}
}
```

Use the following information to map Amazon Rekognition Custom Labels manifest file fields to COCO dataset JSON fields. 

### source-ref


The S3 format URL for the location of the image. The image must be stored in an S3 bucket. For more information, see [source-ref](md-create-manifest-file-object-detection.md#cd-manifest-source-ref). If the `coco_url` COCO field points to an S3 bucket location, you can use the value of `coco_url` for the value of `source-ref`. Alternatively, you can map `source-ref` to the `file_name` (COCO) field and in your transform code, add the required S3 path to where the image is stored. 
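For example, if you map `source-ref` to `file_name`, the transform code might prepend the S3 path where the images are stored. The bucket name and path here are placeholders:

```python
# Hypothetical S3 location where the images are (or will be) stored.
s3_path = "s3://custom-labels-bucket/images/"

def build_source_ref(image):
    # Prepend the S3 path to the COCO file_name field.
    return s3_path + image["file_name"]

source_ref = build_source_ref({"file_name": "000000245915.jpg"})
```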

### *bounding-box*


A label attribute name of your choosing. For more information, see [*bounding-box*](md-create-manifest-file-object-detection.md#md-manifest-source-bounding-box).

#### image_size


The size of the image in pixels. Maps to an `image` object in the [images](md-coco-overview.md#md-coco-images) list.
+ `height` -> `image.height`
+ `width` -> `image.width`
+ `depth` -> Not used by Amazon Rekognition Custom Labels, but a value must be supplied.

#### annotations


A list of `annotation` objects. There’s one `annotation` for each object on the image.

#### annotation


Contains bounding box information for one instance of an object on the image. 
+ `class_id` -> The numerical ID that maps to an entry in the Custom Labels `class-map`.
+ `top` -> `bbox[1]`
+ `left` -> `bbox[0]`
+ `width` -> `bbox[2]`
+ `height` -> `bbox[3]`
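Because a COCO `bbox` array is ordered [left, top, width, height], the mapping above reduces to index lookups. A minimal sketch, using the annotation for image 245915 from the earlier COCO example (the function name is illustrative):

```python
def coco_annotation_to_cl(annotation):
    # COCO bbox order is [left (x), top (y), width, height].
    left, top, width, height = annotation["bbox"]
    return {
        "class_id": annotation["category_id"],
        "top": top,
        "left": left,
        "width": width,
        "height": height,
    }

cl_annotation = coco_annotation_to_cl(
    {"category_id": 0, "bbox": [399, 251, 155, 101]})
```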

### *bounding-box*-metadata


Metadata for the label attribute. Includes the labels and label identifiers. For more information, see [*bounding-box*-metadata](md-create-manifest-file-object-detection.md#md-manifest-source-bounding-box-metadata).

#### Objects


An array of objects in the image. Maps to the `annotations` list by index.

##### Object

+ `confidence` -> Not used by Amazon Rekognition Custom Labels, but a value (1) is required.

#### class-map


A map of the labels (classes) that apply to objects detected in the image. Maps to category objects in the [categories](md-coco-overview.md#md-coco-categories) list.
+ key -> `category.id`
+ value -> `category.name`
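Building `class-map` from the COCO `categories` list is a direct mapping from `category.id` to `category.name`. Note that JSON serialization converts the integer keys to strings, which matches the `"0"`/`"1"` keys in the example JSON line:

```python
import json

categories = [{"supercategory": "speaker", "id": 0, "name": "echo"},
              {"supercategory": "speaker", "id": 1, "name": "echo dot"}]

# Map each category id to its label name.
class_map = {category["id"]: category["name"] for category in categories}

# JSON serialization turns the integer keys into strings.
serialized = json.dumps(class_map)
```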

#### type


Must be `groundtruth/object-detection`.

#### human-annotated


Specify `yes` or `no`. For more information, see [*bounding-box*-metadata](md-create-manifest-file-object-detection.md#md-manifest-source-bounding-box-metadata).

#### creation-date -> [image](md-coco-overview.md#md-coco-images).date_captured


The creation date and time of the image. Maps to the [image](md-coco-overview.md#md-coco-images).date_captured field of an image in the COCO images list. Amazon Rekognition Custom Labels expects the format of `creation-date` to be *Y-M-DTH:M:S* (for example, `2013-11-18T02:53:27`).
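Because COCO stores `date_captured` with a space between the date and the time, the transform needs to reformat the value. A minimal sketch using Python's `datetime` module, with the `date_captured` value from the earlier COCO example:

```python
import datetime

def to_creation_date(date_captured):
    # COCO stores "YYYY-MM-DD HH:MM:SS"; the manifest expects a "T"
    # between the date and the time ("YYYY-MM-DDTHH:MM:SS").
    parsed = datetime.datetime.strptime(date_captured, "%Y-%m-%d %H:%M:%S")
    return parsed.strftime("%Y-%m-%dT%H:%M:%S")

creation_date = to_creation_date("2013-11-18 02:53:27")
```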

#### job-name


A job name of your choosing. 

# The COCO dataset format


A COCO dataset consists of five sections that provide information for the entire dataset. The format for a COCO object detection dataset is documented at [COCO Data Format](http://cocodataset.org/#format-data). 
+ info – general information about the dataset. 
+ licenses – license information for the images in the dataset.
+ [images](#md-coco-images) – a list of images in the dataset.
+ [annotations](#md-coco-annotations) – a list of annotations (including bounding boxes) for the images in the dataset.
+ [categories](#md-coco-categories) – a list of label categories.

To create a Custom Labels manifest, you use the `images`, `annotations`, and `categories` lists from the COCO manifest file. The other sections (`info`, `licenses`) aren’t required. The following is an example COCO manifest file.

```
{
    "info": {
        "description": "COCO 2017 Dataset","url": "http://cocodataset.org","version": "1.0","year": 2017,"contributor": "COCO Consortium","date_created": "2017/09/01"
    },
    "licenses": [
        {"url": "http://creativecommons.org/licenses/by/2.0/","id": 4,"name": "Attribution License"}
    ],
    "images": [
        {"id": 242287, "license": 4, "coco_url": "http://images.cocodataset.org/val2017/xxxxxxxxxxxx.jpg", "flickr_url": "http://farm3.staticflickr.com/2626/xxxxxxxxxxxx.jpg", "width": 426, "height": 640, "file_name": "xxxxxxxxx.jpg", "date_captured": "2013-11-15 02:41:42"},
        {"id": 245915, "license": 4, "coco_url": "http://images.cocodataset.org/val2017/nnnnnnnnnnnn.jpg", "flickr_url": "http://farm1.staticflickr.com/88/xxxxxxxxxxxx.jpg", "width": 640, "height": 480, "file_name": "nnnnnnnnnn.jpg", "date_captured": "2013-11-18 02:53:27"}
    ],
    "annotations": [
        {"id": 125686, "category_id": 0, "iscrowd": 0, "segmentation": [[164.81, 417.51,......167.55, 410.64]], "image_id": 242287, "area": 42061.80340000001, "bbox": [19.23, 383.18, 314.5, 244.46]},
        {"id": 1409619, "category_id": 0, "iscrowd": 0, "segmentation": [[376.81, 238.8,........382.74, 241.17]], "image_id": 245915, "area": 3556.2197000000015, "bbox": [399, 251, 155, 101]},
        {"id": 1410165, "category_id": 1, "iscrowd": 0, "segmentation": [[486.34, 239.01,..........495.95, 244.39]], "image_id": 245915, "area": 1775.8932499999994, "bbox": [86, 65, 220, 334]}
    ],
    "categories": [
        {"supercategory": "speaker","id": 0,"name": "echo"},
        {"supercategory": "speaker","id": 1,"name": "echo dot"}
    ]
}
```

## images list


The images referenced by a COCO dataset are listed in the images array. Each image object contains information about the image such as the image file name. In the following example image object, note the following information and which fields are required to create an Amazon Rekognition Custom Labels manifest file.
+ `id` – (Required) A unique identifier for the image. The `id` field maps to the `image_id` field in the annotations array (where bounding box information is stored).
+ `license` – (Not required) Maps to the licenses array. 
+ `coco_url` – (Not required) The location of the image.
+ `flickr_url` – (Not required) The location of the image on Flickr.
+ `width` – (Required) The width of the image.
+ `height` – (Required) The height of the image.
+ `file_name` – (Required) The image file name. In this example, `file_name` and `id` match, but this is not a requirement for COCO datasets. 
+ `date_captured` – (Required) The date and time the image was captured. 

```
{
    "id": 245915,
    "license": 4,
    "coco_url": "http://images.cocodataset.org/val2017/nnnnnnnnnnnn.jpg",
    "flickr_url": "http://farm1.staticflickr.com/88/nnnnnnnnnnnnnnnnnnn.jpg",
    "width": 640,
    "height": 480,
    "file_name": "000000245915.jpg",
    "date_captured": "2013-11-18 02:53:27"
}
```

## annotations (bounding boxes) list


Bounding box information for all objects on all images is stored in the annotations list. A single annotation object contains bounding box information and the label for one object on an image. There is an annotation object for each instance of an object on an image. 

In the following example, note the following information and which fields are required to create an Amazon Rekognition Custom Labels manifest file. 
+ `id` – (Not required) The identifier for the annotation.
+ `image_id` – (Required) Corresponds to the image `id` in the images array.
+ `category_id` – (Required) The identifier for the label that identifies the object within a bounding box. It maps to the `id` field of the categories array. 
+ `iscrowd` – (Not required) Specifies if the image contains a crowd of objects. 
+ `segmentation` – (Not required) Segmentation information for objects on an image. Amazon Rekognition Custom Labels doesn't support segmentation. 
+ `area` – (Not required) The area of the annotation.
+ `bbox` – (Required) Contains the coordinates, in pixels, of a bounding box around an object on the image, in [left, top, width, height] order.

```
{
    "id": 1409619,
    "category_id": 1,
    "iscrowd": 0,
    "segmentation": [
        [86.0, 238.8,..........382.74, 241.17]
    ],
    "image_id": 245915,
    "area": 3556.2197000000015,
    "bbox": [86, 65, 220, 334]
}
```

## categories list


Label information is stored in the categories array. In the following example category object, note the following information and which fields are required to create an Amazon Rekognition Custom Labels manifest file. 
+ `supercategory` – (Not required) The parent category for a label. 
+ `id` – (Required) The label identifier. The `id` field maps to the `category_id` field in an `annotation` object. In the following example, the identifier for an echo dot is 2. 
+ `name` – (Required) The label name. 

```
        {"supercategory": "speaker","id": 2,"name": "echo dot"}
```

# Transforming a COCO dataset


Use the following Python example to transform bounding box information from a COCO format dataset into an Amazon Rekognition Custom Labels manifest file. The code uploads the created manifest file to your Amazon S3 bucket. The code also provides an AWS CLI command that you can use to upload your images. 

**To transform a COCO dataset (SDK)**

1. If you haven't already:

   1. Make sure you have `AmazonS3FullAccess` permissions. For more information, see [Set up SDK permissions](su-sdk-permissions.md).

   1. Install and configure the AWS CLI and the AWS SDKs. For more information, see [Step 4: Set up the AWS CLI and AWS SDKs](su-awscli-sdk.md).

1. Use the following Python code to transform a COCO dataset. Set the following values.
   + `s3_bucket` – The name of the S3 bucket in which you want to store the images and Amazon Rekognition Custom Labels manifest file. 
   + `s3_key_path_images` – The path to where you want to place the images within the S3 bucket (`s3_bucket`).
   + `s3_key_path_manifest_file` – The path to where you want to place the Custom Labels manifest file within the S3 bucket (`s3_bucket`).
   + `local_path` – The local path to where the example opens the input COCO dataset and also saves the new Custom Labels manifest file.
   + `local_images_path` – The local path to the images that you want to use for training.
   + `coco_manifest` – The input COCO dataset filename.
   + `cl_manifest_file` – A name for the manifest file created by the example. The file is saved at the location specified by `local_path`. By convention, the file has the extension `.manifest`, but this is not required.
   + `job_name` – A name for the Custom Labels job.

   ```
   import json
   import datetime
   import boto3
   
   #S3 location for images
   s3_bucket = 'bucket'
   s3_key_path_manifest_file = 'path to custom labels manifest file/'
   s3_key_path_images = 'path to images/'
   s3_path='s3://' + s3_bucket  + '/' + s3_key_path_images
   s3 = boto3.resource('s3')
   
   #Local file information
   local_path='path to input COCO dataset and output Custom Labels manifest/'
   local_images_path='path to COCO images/'
   coco_manifest = 'COCO dataset JSON file name'
   coco_json_file = local_path + coco_manifest
   job_name='Custom Labels job name'
   cl_manifest_file = 'custom_labels.manifest'
   
   label_attribute ='bounding-box'
   
   # Create an empty manifest file (or truncate an existing one).
   open(local_path + cl_manifest_file, 'w').close()
   
   # class representing a Custom Label JSON line for an image
   class cl_json_line:  
       def __init__(self,job, img):  
   
           #Get image info. Annotations are dealt with separately
           sizes=[]
           image_size={}
           image_size["width"] = img["width"]
           image_size["depth"] = 3
           image_size["height"] = img["height"]
           sizes.append(image_size)
   
           bounding_box={}
           bounding_box["annotations"] = []
           bounding_box["image_size"] = sizes
   
           self.__dict__["source-ref"] = s3_path + img['file_name']
           self.__dict__[job] = bounding_box
   
           #get metadata
           metadata = {}
           metadata['job-name'] = job_name
           metadata['class-map'] = {}
           metadata['human-annotated']='yes'
           metadata['objects'] = [] 
           date_time_obj = datetime.datetime.strptime(img['date_captured'], '%Y-%m-%d %H:%M:%S')
           metadata['creation-date']= date_time_obj.strftime('%Y-%m-%dT%H:%M:%S') 
           metadata['type']='groundtruth/object-detection'
           
           self.__dict__[job + '-metadata'] = metadata
   
   
   print("Getting image, annotations, and categories from COCO file...")
   
   with open(coco_json_file) as f:
   
       #Get custom label compatible info    
       js = json.load(f)
       images = js['images']
       categories = js['categories']
       annotations = js['annotations']
   
       print('Images: ' + str(len(images)))
       print('annotations: ' + str(len(annotations)))
       print('categories: ' + str(len (categories)))
   
   
   print("Creating CL JSON lines...")
       
   images_dict = {image['id']: cl_json_line(label_attribute, image) for image in images}
   
   print('Parsing annotations...')
   for annotation in annotations:
   
       image=images_dict[annotation['image_id']]
   
   
       # get bounding box information
       cl_bounding_box={}
       cl_bounding_box['left'] = annotation['bbox'][0]
       cl_bounding_box['top'] = annotation['bbox'][1]
    
       cl_bounding_box['width'] = annotation['bbox'][2]
       cl_bounding_box['height'] = annotation['bbox'][3]
       cl_bounding_box['class_id'] = annotation['category_id']
   
       getattr(image, label_attribute)['annotations'].append(cl_bounding_box)
   
   
       for category in categories:
           if annotation['category_id'] == category['id']:
               getattr(image, label_attribute + '-metadata')['class-map'][category['id']] = category['name']
           
       
       cl_object={}
       cl_object['confidence'] = int(1)  #not currently used by Custom Labels
       getattr(image, label_attribute + '-metadata')['objects'].append(cl_object)
   
   print('Done parsing annotations')
   
   # Create manifest file.
   print('Writing Custom Labels manifest...')
   
   with open(local_path + cl_manifest_file, 'a+') as outfile:
       for im in images_dict.values():
           json.dump(im.__dict__, outfile)
           outfile.write('\n')
   
   # Upload manifest file to S3 bucket.
   print('Uploading Custom Labels manifest file to S3 bucket')
   print('Uploading ' + local_path + cl_manifest_file + ' to ' + s3_key_path_manifest_file)
   print(s3_bucket)
   s3 = boto3.resource('s3')
   s3.Bucket(s3_bucket).upload_file(local_path + cl_manifest_file, s3_key_path_manifest_file + cl_manifest_file)
   
   # Print the S3 URL to the manifest file.
   print('S3 URL path to the manifest file:')
   print('\033[1m s3://' + s3_bucket + '/' + s3_key_path_manifest_file + cl_manifest_file + '\033[0m') 
   
   # Display the aws s3 sync command.
   print('\nAWS CLI s3 sync command to upload your images to the S3 bucket:')
   print ('\033[1m aws s3 sync ' + local_images_path + ' ' + s3_path + '\033[0m')
   ```

1. Run the code.

1. In the program output, note the `s3 sync` command. You need it in the next step.

1. At the command prompt, run the `s3 sync` command. Your images are uploaded to the S3 bucket. If the command fails during upload, run it again until your local images are synchronized with the S3 bucket.

1. In the program output, note the S3 URL path to the manifest file. You need it in the next step.

1. Follow the instructions at [Creating a dataset with a SageMaker AI Ground Truth manifest file (Console)](md-create-dataset-ground-truth.md#md-create-dataset-ground-truth-console) to create a dataset with the uploaded manifest file. For step 8, in **.manifest file location**, enter the Amazon S3 URL you noted in the previous step. If you are using the AWS SDK, follow [Creating a dataset with a SageMaker AI Ground Truth manifest file (SDK)](md-create-dataset-ground-truth.md#md-create-dataset-ground-truth-sdk).