

# Working with jobs
<a name="working-with-jobs"></a>

A job does the work of transcoding one or more media files. In addition to creating jobs, you can duplicate, export, import, and cancel jobs. You can also view job history, and search for jobs.

This chapter provides step-by-step instructions on how to work with MediaConvert jobs. It also provides an introduction to key concepts within jobs, basic example job settings, and details about important input and output settings that apply to common job configurations.

**Topics**
+ [Creating a job](creating-a-job.md)
+ [Duplicating a job](create-new-job-from-completed-job.md)
+ [Exporting and importing jobs](exporting-and-importing-jobs.md)
+ [Viewing your job history](viewing-job-history.md)
+ [Canceling a job](canceling-a-job.md)
+ [Tutorial: Configuring job settings](setting-up-a-job.md)
+ [Example job settings JSONs](example-job-settings.md)
+ [Input settings](specifying-inputs.md)
+ [Output settings](output-settings.md)

# Creating a job
<a name="creating-a-job"></a>

To create a job, you specify your input settings, output settings, and any job-wide settings. For a detailed step-by-step procedure, see [Tutorial: Configuring job settings](setting-up-a-job.md). The following procedure is a high-level overview of how to create a job using the AWS Management Console.

When you create a job, you submit it to a queue for processing. Processing begins automatically from your queues as resources allow. For information about resource allocation, see [Processing multiple jobs in parallel](working-with-on-demand-queues.md#queue-resources). 

**To create a job using the MediaConvert console**

1. Open the [Jobs](https://console.aws.amazon.com/mediaconvert/home#/jobs/list) page in the MediaConvert console.

1. Choose **Create job**.

1. On the **Create job** page, provide transcode instructions and job settings. For more information, see [Tutorial: Configuring job settings](setting-up-a-job.md). 

   Make sure that you select the same AWS Region for your job and your file storage. 

1. Choose **Create**.

You can also create a job using a [Template](using-a-job-template.md), [Preset](using-a-preset-to-specify-a-job-output.md), [duplicated job](create-new-job-from-completed-job.md), or [job settings JSON](exporting-and-importing-jobs.md).
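You can also submit a job programmatically with the AWS CLI. The following sketch assumes that you have saved a job settings JSON (for example, one generated by the console) as `job.json`, and that the role ARN shown is a placeholder for your own MediaConvert IAM role:

```
aws mediaconvert create-job \
	--role arn:aws:iam::111122223333:role/MediaConvertRole \
	--settings file://job.json
```

The command returns the new job's ID, which you can use with `get-job` to check its status.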

# Duplicating a job
<a name="create-new-job-from-completed-job"></a>

To create a job that is similar to one that you ran before, duplicate a job from your job history. You can modify any settings in the duplicated job before you run it.

**To create a job based on a recent job using the MediaConvert console**

1. Open the [Jobs](https://console.aws.amazon.com/mediaconvert/home#/jobs/list) page in the MediaConvert console.

1. Choose the **Job ID** of the job that you want to duplicate.

1. Choose **Duplicate**.

1. Optionally modify any job settings. 

   Settings that are likely to change from job to job include the input file location, output destination locations, and output name modifiers. If you run transcoding jobs for customers whose AWS accounts are different from yours, you must also change the **IAM role** under **Job settings**.

1. Choose **Create** at the bottom of the page.

# Exporting and importing jobs
<a name="exporting-and-importing-jobs"></a>

Completed MediaConvert jobs remain on the **Jobs** page for three months. To run a new job based on a completed job more than three months later, export the job after it completes and save the JSON. Depending on how many jobs you run, exporting and then importing a job can be simpler than finding a particular job in your list and duplicating it.

**To export a job using the MediaConvert console**

1. Open the [Jobs](https://console.aws.amazon.com/mediaconvert/home#/jobs/list) page in the MediaConvert console.

1. Choose the **Job ID** of the job that you want to export.

1. On the **Job summary** page, choose the **View JSON** button.

1. Choose **Copy** to copy the JSON to your clipboard.

1. Paste into your JSON editor and save.
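As an alternative to copying from the console, you can export a job's settings with the AWS CLI. This sketch assumes that the `jq` tool is installed and uses the same placeholder job ID as the other examples in this guide; the `get-job` response wraps the reusable settings under `.Job.Settings`:

```
aws mediaconvert get-job --id 1234567890123-efg456 \
	| jq '.Job.Settings' > exported-job-settings.json
```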

**To import a job using the MediaConvert console**

1. Open the [Jobs](https://console.aws.amazon.com/mediaconvert/home#/jobs/list) page in the MediaConvert console.

1. Choose **Import job**.

# Viewing your job history
<a name="viewing-job-history"></a>

You can view the recent history of MediaConvert jobs that you created with your AWS account in a given AWS Region. After three months, the service automatically deletes the record of a job.

The **Jobs** page shows jobs that have completed successfully, been canceled, are being processed, are waiting in the queue, or ended in error. You can filter the job history list by status and by the queue that the jobs were sent to. You can also choose a specific job from the list to view its settings.

------
#### [ Console  ]

**To view your jobs using the MediaConvert console**

1. Open the [Jobs](https://console.aws.amazon.com/mediaconvert/home#/jobs/list) page in the MediaConvert console.

1. Optionally, filter the list by status and queue by choosing from the dropdown lists.

1. To see details for a job, choose a **Job ID** to view its **Job summary** page.

------
#### [ CLI  ]

The following `list-jobs` example lists up to twenty of your most recently created jobs.

```
aws mediaconvert list-jobs
```
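To narrow the list, `list-jobs` also accepts optional filters. For example, the following sketch returns only the five most recently created jobs that completed successfully:

```
aws mediaconvert list-jobs \
	--status COMPLETE \
	--order DESCENDING \
	--max-results 5
```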

For more information about how to list jobs using the AWS CLI, see the [AWS CLI command reference](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/mediaconvert/list-jobs.html).

------

# Canceling a job
<a name="canceling-a-job"></a>

The following procedures explain how to cancel a job using the AWS Elemental MediaConvert console or the AWS CLI.

------
#### [ Console  ]

**To cancel a job using the MediaConvert console**

1. Open the [Jobs](https://console.aws.amazon.com/mediaconvert/home#/jobs/list) page in the MediaConvert console.

1. Select the **Job ID** of the job that you want to cancel by choosing the option (![\[Empty circle outline representing a placeholder or selection option.\]](http://docs.aws.amazon.com/mediaconvert/latest/ug/images/circle-icon.png)) next to it.

1. Choose **Cancel job**.

------
#### [ CLI  ]

The following `cancel-job` example cancels a job.

```
aws mediaconvert cancel-job \
	--id 1234567890123-efg456
```

For more information about how to cancel a job using the AWS CLI, see the [AWS CLI command reference](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/mediaconvert/cancel-job.html).

------

# Tutorial: Configuring job settings
<a name="setting-up-a-job"></a>

This page provides step-by-step guidance on how to configure a job in MediaConvert.

To configure a job, you define input files for the service to transcode, and you specify the source for each video, audio, and captions media element. That source might be a specific part of the primary input file, or it might be a separate file. Next, you specify the types of output files and packages that you want AWS Elemental MediaConvert to generate from the input. You also specify the detailed encoding settings to produce the quality and type of output that you want. 

This tutorial shows how to configure jobs in MediaConvert to transcode media files into different formats. 

**Topics**
+ [Optional step: Pause queues](#optional-pause-the-queue)
+ [Step 1: Specify input files](#specify-input-settings)
+ [Step 2: Create input selectors](#create-selectors)
+ [Step 3: Create output groups](#specify-output-groups)
+ [Step 4: Create outputs](#create-outputs)
+ [Step 5: Specify global job settings](#specify-global-job-settings)

## Optional step: Pause queues
<a name="optional-pause-the-queue"></a>

If you're a new customer or you're experimenting with the MediaConvert console, you can pause your queues to avoid accidentally starting a job before you're ready. For more information about queues, see [Queues](working-with-queues.md).

**To pause or reactivate an on-demand queue using the AWS Management Console**

1. Open the [Queues](https://console.aws.amazon.com/mediaconvert/home/#/queues/list) page in the MediaConvert console.

1. On the **Queues** page, choose the name of the queue that you want to pause or reactivate.

1. On the queue’s page, choose the **Edit queue** button.

1. On the **Edit queue** page, for **Status**, choose **Paused** or **Active**.

1. Choose **Save queue**.
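You can also pause or reactivate a queue with the AWS CLI. The following sketch pauses the default on-demand queue; set `--status ACTIVE` to reactivate it:

```
aws mediaconvert update-queue \
	--name Default \
	--status PAUSED
```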

## Step 1: Specify input files
<a name="specify-input-settings"></a>

The first part of setting up a MediaConvert job is specifying the location of your input file or files.

**To specify the location of your input**

1. Open the MediaConvert console at [https://console.aws.amazon.com/mediaconvert](https://console.aws.amazon.com/mediaconvert).

1. On the **Create job** page, in the **Job** pane on the left, choose **Input 1**.

1. In the **Input 1** pane, provide the URI to your video input file that is stored in Amazon S3 or on an HTTP(S) server. For Amazon S3 inputs, you can specify the URI directly or choose **Browse** to select from your Amazon S3 buckets. For HTTP(S) inputs, provide the URL to your input video file. For more information, see [HTTP input requirements](http-input-requirements.md). 
**Note**  
If your input audio or captions are in a separate file, don't create separate inputs for them. You specify these files later in this procedure, within your audio and captions selectors.

1. To join more than one input file into a single asset (input stitching), add another input to the job. To do so, in the **Job** pane, in the **Inputs** section, choose **Add**. 

   For jobs that have multiple input files, MediaConvert creates outputs by concatenating the inputs in the order that you specify them in the job. You can include up to 150 inputs in your job.
**Tip**  
You can transcode portions of your inputs. For more information, see [Input settings](specifying-inputs.md).
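In the job settings JSON, each input that you add on the console appears as one entry in the `Inputs` array, and stitching follows array order. The following fragment is a minimal sketch with placeholder file names; a complete input also carries selector settings, which the console fills in with defaults:

```
"Inputs": [
  { "FileInput": "s3://amzn-s3-demo-bucket/part-1.mp4" },
  { "FileInput": "s3://amzn-s3-demo-bucket/part-2.mp4" }
]
```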

## Step 2: Create input selectors for video, audio, and captions
<a name="create-selectors"></a>

Next, create input selectors to flag the video, audio, and captions elements from your input that you will use in your outputs. This labels each input element so that you can point to it when you set up your outputs. When you set up input selectors, you also provide the service with information about where to find the data and how to interpret it.

**To set up your input selectors**

1. In the **Video selector** section, specify values for the fields that are applicable to your job. 

   You don't need to create a video selector; MediaConvert automatically creates one when you begin setting up a job. However, the service doesn't automatically detect information about the video source. You can provide this information in the **Video selector** fields. If you keep these settings in their default state, you will create a valid job. For more information about individual settings, choose the **Info** link next to each setting.
**Note**  
 MediaConvert doesn't support inputs with multiple video streams, such as Quad 4k. Each input can have only one video selector. Therefore, there is no **Add video selector** button on the console.

1. In the **Audio selectors** section, under **Audio selector 1**, specify information about your primary audio asset. You don't need to create an audio selector 1 because the service automatically creates the first audio selector when you set up a job.
**Note**  
An *audio asset* is often dialogue, background sound, and music together in one track. Tracks often consist of multiple channels. For example, Dolby 5.1 sound has six channels per track.

   1. For **Selector type**, choose the way that your audio assets are identified. Often, this is by track. If you are using an HLS input, and would like to select an alternate audio rendition, see [Alternate HLS audio rendition requirements](using-alternate-audio-renditions.md).

   1. Provide the identifier (that is, track number, PID, or language code) for your primary audio asset. Your primary audio asset is likely to be track 1.
**Note**  
For most use cases, you associate one input track per input selector. If your use case requires combining multiple tracks into one track, or multiple tracks into one rendition of a streaming package, combine multiple input tracks in one audio selector by typing a comma-separated list. For more information about combining tracks, see [Setting up audio tracks and audio selectors](more-about-audio-tracks-selectors.md).

   1. If your audio is in a separate file from your video, choose the **External file** slider switch element and provide the URI to your audio input file that is stored in Amazon S3 or on an HTTP(S) server. For Amazon S3 inputs, you can specify the URI directly or choose **Browse** to select from your Amazon S3 buckets. For HTTP(S) inputs, provide the URL to your input audio file. For more information, see [HTTP input requirements](http-input-requirements.md). 

1. If you have additional audio assets, such as multiple language tracks, choose **Add audio selector**. Then provide information about the next asset that is described in the preceding step of this procedure.

1. In the **Captions selectors** section, choose **Add captions selector**. This creates input captions selectors for any sets of captions that you plan to use in an output. For more information about setting up captions for your job, see [Setting up input captions](including-captions.md).
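In the job settings JSON, the selectors that you create in this step appear inside each input. The following fragment is a minimal sketch: the first selector picks track 1 of the embedded audio, and the second points at a separate audio file (with a placeholder URI), which is what the **External file** switch sets:

```
"AudioSelectors": {
  "Audio Selector 1": {
    "DefaultSelection": "DEFAULT",
    "SelectorType": "TRACK",
    "Tracks": [ 1 ]
  },
  "Audio Selector 2": {
    "SelectorType": "TRACK",
    "Tracks": [ 1 ],
    "ExternalAudioFileInput": "s3://amzn-s3-demo-bucket/audio-french.wav"
  }
}
```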

## Step 3: Create output groups
<a name="specify-output-groups"></a>

After specifying your input, you create output groups. The choices that you make when you set up output groups affect the types of assets that your job produces and which devices can play them.

You can use MediaConvert to create media assets that fall broadly into two categories:
+ **ABR streaming packages**. You can create adaptive bitrate (ABR) packages so that end viewers can download the asset gradually while they watch. Depending on how you set up your outputs, the end viewer's device can adapt to changes in the available bandwidth by downloading higher or lower-quality segments. ABR packages are also called ABR *stacks*, because they are made up of a stack of video, audio, and captions components. Each component in the stack or package is called a *rendition*.
+ **Standalone files**. You might create these files and host them in a location where end viewers download the entire file all at once and then view it. You might also create standalone files and then send them to downstream systems for packaging and distribution.

**To create an output group**

1. In the **Job** pane, in the **Output groups** section, choose **Add**.

1. Choose an output group type, and then choose **Select**. 

   Create one file output group for all the standalone files that you intend to create. Create one ABR streaming output group for each ABR streaming package that you intend to create. For guidance on which ABR streaming output groups to include in your job, see [Choosing your ABR streaming output groups](choosing-your-streaming-output-groups.md).

1. Optionally, for **Custom group name**, enter a name for your group. Any name that you provide here appears in the **Output groups** section of the console but does not affect your outputs.

1. For **Destination**, specify the URI for the Amazon S3 location where the transcoding service will store your output files. You can specify the URI directly or choose **Browse** to select from your Amazon S3 buckets.
**Note**  
You can optionally append a basename to your destination URI. To create the file name of your final asset, the transcoding service uses this basename and any name modifier that you provide in the individual output settings.  
If you don't provide a basename with your URI, the transcoding service generates a basename from the input 1 file name, minus the extension.

1. Specify the values for any additional settings that apply to the entire output group. These settings vary depending on the type of output group that you select. For more information about individual settings, choose the **Info** link next to each setting.
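As an illustration of how the destination basename and name modifiers combine, assume the hypothetical destination URI and modifier below; the file extension depends on the output group type (here, an Apple HLS output):

```
Destination:     s3://amzn-s3-demo-bucket1/out
Name modifier:   -video-hi-res
Resulting file:  s3://amzn-s3-demo-bucket1/out-video-hi-res.m3u8
```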

## Step 4: Create outputs
<a name="create-outputs"></a>

After you create output groups, set up your outputs in each group. The number of outputs for each output group depends on the output group type, as follows:
+ For **File** output groups, include all elements of the media asset in one output. This includes any audio or captions that you provide in a separate file. 
+ For ABR streaming output groups—**CMAF**, **Apple HLS**, **DASH ISO**, and **Microsoft Smooth Streaming**—create a separate output for each media element. That is, one output per video resolution, one output per audio track, and one output per captions language.

Choose from one of the following procedures that correspond to the output group types that you created in [Step 3: Create output groups](#specify-output-groups).

### Creating outputs in ABR streaming output groups
<a name="create-outputs-in-abr-streaming-output-groups"></a>

For each ABR streaming output group that you set up in [Step 3: Create output groups](#specify-output-groups), create and set up an output for each media element that you want in the ABR streaming package.

#### Creating video ABR streaming outputs
<a name="video-abr-streaming-outputs"></a>

For each video output that you include in your output group, MediaConvert creates one video rendition, or set of segmented video files. Multiple video renditions in a streaming package, of varying resolutions and video quality, allow the end viewer's device to adapt the quality of video to the available bandwidth.

**Note**  
Although the job has only one video *input* selector, ABR streaming output groups often have several video *outputs* per output group. 

**To create and set up video ABR streaming outputs**

1. On the **Create job** page, in the **Job** pane on the left, under **Output Groups**, below the **CMAF**, **Apple HLS**, **DASH ISO**, or **Microsoft Smooth Streaming** output group that you want to add outputs to, choose **Output 1**. 

   When you create an output group, MediaConvert automatically populates the output group with output 1. You don't need to explicitly create the first output.

1. In the **Output settings** pane, for **Name modifier**, enter a value.

   MediaConvert appends the name modifier to the file names that it creates for this output. Enter a name modifier that will make it easy to identify which files came from which output, such as `-video-hi-res`.

1. If one of the predefined groups of settings listed under **Preset** is suitable for your workflow, choose it from the list. If you use a preset, skip the next step of this procedure.

1. Specify your video settings as follows:

   1. In the **Output settings** section, specify values for any remaining general settings. Depending on the output group type, these settings might include transport stream settings or other container settings. For more information about individual settings, choose the **Info** link next to each setting.

   1. In the **Stream settings** section, specify values for video encoding. The video settings are selected by default, so you don't need to explicitly choose this group of settings. 

      There is only one input video selector per job, so you don't need to explicitly choose it when you set up your video outputs.

   For more information about individual settings, choose the **Info** links on the console.

1. If your output includes a group of audio settings by default, delete it as follows:

   1. In the **Stream settings** section, choose **Audio 1**.

   1. Choose **Remove audio**.

1. If you want multiple video renditions in your ABR streaming package, repeat the preceding steps of this procedure to create an additional video output for each rendition.

#### Creating audio ABR streaming outputs
<a name="audio-abr-streaming-outputs"></a>

For each audio output that you include in your output group, MediaConvert creates one audio rendition, or set of segmented audio files. The most common reason to include multiple audio renditions is to provide multiple language options. If you provide only one language, you probably need only one audio output.

**Note**  
For AAC streaming outputs, the initial segment is longer in duration than the others. This is because, with AAC, the initial segment must contain silent AAC pre-roll samples before the audible part of the segment. MediaConvert accounts for these extra samples in the timestamps, so the audio plays back correctly. 

**To create and set up audio ABR streaming outputs**

1. Create an output for your first audio track. Usually an audio track corresponds to one language. If you're working in a CMAF output group, skip this step; MediaConvert automatically creates the first audio output for you.

   1. In the **Job** pane, choose the output group that you're working in.

   1. In the **Outputs** pane, choose **Add output**. 

   1. Choose the output that you just created.

   1. If your output includes a group of video settings by default, choose **Remove video** to delete it. This keeps the **Audio 1** group of settings displayed.

1. In the **Output settings** pane, for **Name modifier**, enter a value.

   MediaConvert appends the name modifier to the file names that it creates for this output. Enter a name modifier that will make it easy to identify which files came from which output, such as `-audio-english`.

1. If one of the predefined groups of settings listed under **Preset** is suitable for your workflow, choose it from the list. If you use a preset, skip the next step of this procedure.

1. Specify your audio settings as follows:

   1. In the **Output settings** section, specify values for any remaining general settings. For more information about individual settings, choose the **Info** link next to each setting.

   1. Under **Stream settings**, for **Audio source**, choose one of the audio selectors that you created in [Step 2: Create input selectors for video, audio, and captions](#create-selectors).

   1. In the **Stream settings** section, specify values for audio encoding. For more information about individual settings, choose the **Info** link next to each setting.

1. If you have additional audio assets to include in the ABR streaming package, create an output for each of them as follows:

   1. In the **Job** pane, choose the output group that you're working in.

   1. In the **Outputs** pane, choose **Add output**. 

   1. Choose the output that you just created.

   1. If your output includes a group of video settings by default, choose **Remove video** to delete it. This keeps the **Audio 1** group of settings displayed.

   1. Set up the output as described in steps 2 through 4 of this procedure.

#### Creating captions for ABR streaming outputs
<a name="captions-abr-streaming-outputs"></a>

Setting up captions can be complex. For detailed information, see [Setting up input captions](including-captions.md). For basic instructions, complete the following procedure.

**To create and set up captions for ABR streaming outputs**

1. Create an output for your first set of captions. Usually a set of captions corresponds to one language.

   1. In the **Job** pane, choose the output group that you're working in.

   1. In the **Outputs** pane, choose **Add output**. 

   1. Choose the output that you just created.

   1. If your output includes groups of video and audio settings by default, choose **Remove video** and **Remove audio** to delete them. 

   1. Choose **Add captions** to display a set of captions settings.

1. In the **Output settings** pane, for **Name modifier**, enter a value.

   MediaConvert appends the name modifier to the file names that it creates for this output. Enter a name modifier that will make it easy to identify which files came from which output, such as `-captions-english`.

1. Specify your captions settings as follows:

   1. In the **Output settings** section, specify values for any remaining general settings. For more information about individual settings, choose the **Info** link next to each setting.

   1. Under **Stream settings**, for **Captions source**, choose one of the captions selectors that you created in [Step 2: Create input selectors for video, audio, and captions](#create-selectors).

   1. In the **Stream settings** section, specify values for the remaining captions settings. 

#### Creating additional manifests
<a name="create-additional-manifests"></a>

By default, MediaConvert generates a single multivariant playlist for each of your CMAF, DASH ISO, Apple HLS, and Microsoft Smooth Streaming output groups. This default manifest references all the outputs in the output group. 

Optionally, you can create additional multivariant playlists that reference only a subset of the outputs in your output group. For example, you might want to create a manifest that doesn't include HDR outputs, for viewers who don't have a subscription that includes HDR.

**Note**  
For CMAF output groups, if you keep the default enabled value for **Write HLS manifest** and **Write DASH manifest**, MediaConvert creates additional manifests in both of those formats. If you disable either of those settings, MediaConvert doesn't create additional manifests in that format.

**To create an additional manifest**

1. On the **Create job** page, in the **Job** pane on the left, choose the output group that you want to create the additional manifest for.

1. In the **Additional manifests** section on the right, choose **Add manifest**.

1. For **Manifest name modifier**, enter the text that you want to be at the end of the manifest file name, before the extension. This setting is required, because it gives each manifest a different file name.

1. For **Select outputs**, choose the outputs that you want the manifest to refer to.

1. Repeat these steps to create up to 10 additional manifests. Each additional manifest must have a different value for **Manifest name modifier**.
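In the job settings JSON, additional manifests appear inside the output group settings. The following fragment is a sketch with hypothetical name modifiers; `SelectedOutputs` lists the **Name modifier** values of the outputs that the extra manifest should reference:

```
"AdditionalManifests": [
  {
    "ManifestNameModifier": "-no-hdr",
    "SelectedOutputs": [ "-video-sdr", "-audio-english" ]
  }
]
```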

### Creating and setting up outputs in File output groups
<a name="create-outputs-in-file-output-groups"></a>

With File output groups, each asset that the service creates corresponds to one output, rather than to one output group. Each asset contains all video, audio, and captions elements. Therefore, it's simplest to first create the output and then set up all of its output selectors. 

#### Create file outputs
<a name="create-file-outputs"></a>

If you created a file output group in [Step 3: Create output groups](#specify-output-groups), create and set up an output in the file output group for each standalone file that you intend to create.

**To create an output in a file output group**

1. When you create an output group, MediaConvert automatically populates the output group with output 1, so you don't need to explicitly create it. If you are creating only one standalone file, skip the rest of this procedure.

1. If you want to create more than one standalone file, create additional outputs as follows:

   1. On the **Create job** page, in the **Job** pane on the left, under **Output Groups**, choose **File group**.

   1. In the **Outputs** pane, choose **Add output**.

#### Set up output selectors in file outputs
<a name="set-up-output-selectors-in-file-outputs"></a>

Next, for each file output that you just created, set up output selectors. 

**To set up output selectors in a file output**

1. On the **Create job** page, in the **Job** pane on the left, under **Output Groups**, under **File group**, choose **Output 1**. 

1. In the **Output settings** pane, for **Name modifier**, enter a value.

   MediaConvert appends the name modifier to the file names that it creates for this output. Enter a name modifier that identifies which files came from which output, such as `-standalone-hi-res`.

1. If one of the predefined groups of settings listed under **Preset** is suitable for your workflow, choose it from the list. If you use a preset, skip step 4 of this procedure. 

   Output presets can contain up to one set each of video, audio, and captions settings. Therefore, if your standalone output file contains more than one audio or captions asset, you can't use a preset. If you can't use presets in your output, but you want to use the preset settings as a starting point, choose the preset, then choose **No preset** from the **Preset** dropdown list. This prepopulates your output with the same settings that are in the preset.

1. Specify your output settings as follows:

   1. In the **Output settings** section, specify values for any remaining general settings. These settings vary depending on the container that you choose. For more information about individual settings, choose the **Info** link next to each setting.

   1. In the **Stream settings** section, specify values for video encoding. For more information about individual settings, choose the **Info** link next to each setting.
**Note**  
The video settings tab is selected by default, so you don't need to explicitly choose this group of settings. There is only one input video selector per job, so you don't need to explicitly choose it when you set up your video outputs.

   1. Choose **Audio 1** to display the group of encoding settings for the first audio asset. **Audio 1** is located on the left side of the **Stream settings** pane, below **Video**.

   1. Under **Stream settings**, for **Audio source**, choose one of the audio selectors that you created in [Step 2: Create input selectors for video, audio, and captions](#create-selectors).

   1. In the **Stream settings** section, specify values for audio encoding. For more information about individual settings, choose the **Info** link next to each setting.

   1. To include captions in the output, choose **Add captions**. This displays a group of captions settings. For more information about setting up captions, see [Setting up input captions](including-captions.md).

## Step 5: Specify global job settings
<a name="specify-global-job-settings"></a>

Global job settings apply to every output that the job creates.

If your job incorporates audio or captions provided in a separate file from your input, or if you use the graphic overlay (image inserter) feature, it is especially important to set these job-wide settings correctly.

There are three distinct groups of timecode settings. Global job timecode configuration is one of those three. For more information about the different sets of timecode settings and how MediaConvert manages timecodes, see [Setting up timecodes](setting-up-timecode.md).

**To specify global job settings**

1. In the **Job** pane, in the **Job settings** section, choose **AWS integration**.

1. For **IAM role**, choose an IAM role that has permissions to access the Amazon S3 buckets that hold your input and output files. The IAM role must have a trusted relationship with MediaConvert. For information about creating this role, see [Setting up IAM permissions](iam-role.md).

1. Optionally, specify job-wide timecode settings in the **Timecode configuration** pane.

1. Specify values for the other job settings and enable global processors. For more information about individual settings, choose the **Info** link next to each setting.

# Example job settings JSONs
<a name="example-job-settings"></a>

The job settings in these examples represent the simplest valid jobs you can run. They work well for experimenting with the service. When you want to perform more complex transcodes or create different outputs, use the console to set up your job and to generate your JSON job specification. To do so, in the **Job** pane on the left, under **Job settings**, choose **Show job JSON**.

For more information about submitting your job programmatically, see one of the following introductory topics in the *AWS Elemental MediaConvert API Reference*:
+ [Getting started with AWS Elemental MediaConvert using the AWS SDKs or the AWS CLI](https://docs.aws.amazon.com/mediaconvert/latest/apireference/custom-endpoints.html)
+ [Getting started with AWS Elemental MediaConvert using the API](https://docs.aws.amazon.com/mediaconvert/latest/apireference/getting-started.html)
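
As an illustrative sketch of programmatic submission with the AWS SDK for Python (Boto3), you might load an exported job JSON, fill in the role, and pass the settings to `CreateJob`. The helper function and file name here are hypothetical; see the API Reference topics above for the authoritative steps:

```python
import json

def build_create_job_request(settings_text: str, role_arn: str) -> dict:
    """Parse an exported job JSON and return keyword arguments for CreateJob."""
    job = json.loads(settings_text)
    job["Role"] = role_arn  # replaces the ROLE ARN placeholder
    return {"Role": job["Role"], "Settings": job["Settings"]}

# Submitting the job requires AWS credentials; shown here as a sketch only:
# import boto3
# mc = boto3.client("mediaconvert", region_name="us-west-2")
# with open("job.json") as f:
#     kwargs = build_create_job_request(f.read(), "arn:aws:iam::111122223333:role/MediaConvertRole")
# response = mc.create_job(**kwargs)
# print(response["Job"]["Id"])
```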

**Important**  
We recommend that you use the MediaConvert console to generate your production JSON job specification.  
Your job specification must pass validation by the transcoding engine. These validations capture complex dependencies among groups of settings, and between your transcoding settings and the properties of your input files. The MediaConvert console functions as an interactive job builder that makes it easy to create valid job JSON specifications. You can use [job templates](using-a-job-template.md) and [output presets](using-a-preset-to-specify-a-job-output.md) to get started quickly.

To use these examples, replace the following placeholder values with actual values:
+ ROLE ARN
+ s3://amzn-s3-demo-bucket
+ s3://amzn-s3-demo-bucket1

**Topics**
+ [Example: MP4 output](#mp4-example)
+ [Example: ABR output](#HLS-ABR-example)
+ [Example: Automated ABR](#auto-abr-example)

## Example: MP4 output
<a name="mp4-example"></a>

```
{
  "UserMetadata": {},
  "Role": "ROLE ARN",
  "Settings": {
    "OutputGroups": [
      {
        "Name": "File Group",
        "OutputGroupSettings": {
          "Type": "FILE_GROUP_SETTINGS",
          "FileGroupSettings": {
            "Destination": "s3://amzn-s3-demo-bucket1/out"
          }
        },
        "Outputs": [
          {
            "VideoDescription": {
              "ScalingBehavior": "DEFAULT",
              "TimecodeInsertion": "DISABLED",
              "AntiAlias": "ENABLED",
              "Sharpness": 50,
              "CodecSettings": {
                "Codec": "H_264",
                "H264Settings": {
                  "InterlaceMode": "PROGRESSIVE",
                  "NumberReferenceFrames": 3,
                  "Syntax": "DEFAULT",
                  "Softness": 0,
                  "GopClosedCadence": 1,
                  "GopSize": 48,
                  "Slices": 1,
                  "GopBReference": "DISABLED",
                  "SlowPal": "DISABLED",
                  "SpatialAdaptiveQuantization": "ENABLED",
                  "TemporalAdaptiveQuantization": "ENABLED",
                  "FlickerAdaptiveQuantization": "DISABLED",
                  "EntropyEncoding": "CABAC",
                  "Bitrate": 4500000,
                  "FramerateControl": "SPECIFIED",
                  "RateControlMode": "CBR",
                  "CodecProfile": "HIGH",
                  "Telecine": "NONE",
                  "MinIInterval": 0,
                  "AdaptiveQuantization": "HIGH",
                  "CodecLevel": "LEVEL_4_1",
                  "FieldEncoding": "PAFF",
                  "SceneChangeDetect": "ENABLED",
                  "QualityTuningLevel": "SINGLE_PASS_HQ",
                  "FramerateConversionAlgorithm": "DUPLICATE_DROP",
                  "UnregisteredSeiTimecode": "DISABLED",
                  "GopSizeUnits": "FRAMES",
                  "ParControl": "INITIALIZE_FROM_SOURCE",
                  "NumberBFramesBetweenReferenceFrames": 3,
                  "RepeatPps": "DISABLED",
                  "HrdBufferSize": 9000000,
                  "HrdBufferInitialFillPercentage": 90,
                  "FramerateNumerator": 24000,
                  "FramerateDenominator": 1001
                }
              },
              "AfdSignaling": "NONE",
              "DropFrameTimecode": "ENABLED",
              "RespondToAfd": "NONE",
              "ColorMetadata": "INSERT",
              "Width": 1920,
              "Height": 1080
            },
            "AudioDescriptions": [
              {
                "AudioTypeControl": "FOLLOW_INPUT",
                "CodecSettings": {
                  "Codec": "AAC",
                  "AacSettings": {
                    "AudioDescriptionBroadcasterMix": "NORMAL",
                    "Bitrate": 96000,
                    "RateControlMode": "CBR",
                    "CodecProfile": "LC",
                    "CodingMode": "CODING_MODE_2_0",
                    "RawFormat": "NONE",
                    "SampleRate": 48000,
                    "Specification": "MPEG4"
                  }
                },
                "LanguageCodeControl": "FOLLOW_INPUT"
              }
            ],
            "ContainerSettings": {
              "Container": "MP4",
              "Mp4Settings": {
                "CslgAtom": "INCLUDE",
                "FreeSpaceBox": "EXCLUDE",
                "MoovPlacement": "PROGRESSIVE_DOWNLOAD"
              }
            }
          }
        ]
      }
    ],
    "AdAvailOffset": 0,
    "Inputs": [
      {
        "AudioSelectors": {
          "Audio Selector 1": {
            "Tracks": [
              1
            ],
            "Offset": 0,
            "DefaultSelection": "DEFAULT",
            "SelectorType": "TRACK",
            "ProgramSelection": 1
          },
          "Audio Selector 2": {
            "Tracks": [
              2
            ],
            "Offset": 0,
            "DefaultSelection": "NOT_DEFAULT",
            "SelectorType": "TRACK",
            "ProgramSelection": 1
          }
        },
        "VideoSelector": {
          "ColorSpace": "FOLLOW"
        },
        "FilterEnable": "AUTO",
        "PsiControl": "USE_PSI",
        "FilterStrength": 0,
        "DeblockFilter": "DISABLED",
        "DenoiseFilter": "DISABLED",
        "TimecodeSource": "EMBEDDED",
        "FileInput": "s3://amzn-s3-demo-bucket"
      }
    ]
  }
}
```
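
Two of the H.264 values in this example are related by simple arithmetic, which can help when you adapt the numbers: the HRD buffer holds two seconds of video at the specified CBR bitrate, and the framerate fraction 24000/1001 is the NTSC film rate (23.976 fps). A quick check:

```python
bitrate = 4_500_000            # bits per second, from H264Settings.Bitrate
hrd_buffer_size = 9_000_000    # bits, from H264Settings.HrdBufferSize
initial_fill = 90              # percent, from HrdBufferInitialFillPercentage

buffer_seconds = hrd_buffer_size / bitrate           # 2.0 seconds of video
initial_fill_bits = hrd_buffer_size * initial_fill / 100

framerate = 24000 / 1001       # NTSC film rate
print(buffer_seconds, round(framerate, 3))  # 2.0 23.976
```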



## Example: ABR output
<a name="HLS-ABR-example"></a>

```
{
  "UserMetadata": {},
  "Role": "ROLE ARN",
  "Settings": {
    "OutputGroups": [
      {
        "Name": "Apple HLS",
        "Outputs": [
          {
            "ContainerSettings": {
              "Container": "M3U8",
              "M3u8Settings": {
                "AudioFramesPerPes": 2,
                "PcrControl": "PCR_EVERY_PES_PACKET",
                "PmtPid": 480,
                "PrivateMetadataPid": 503,
                "ProgramNumber": 1,
                "PatInterval": 100,
                "PmtInterval": 100,
                "VideoPid": 481,
                "AudioPids": [
                  482,
                  483,
                  484,
                  485,
                  486,
                  487,
                  488,
                  489,
                  490,
                  491,
                  492
                ]
              }
            },
            "VideoDescription": {
              "Width": 1920,
              "Height": 1080,
              "VideoPreprocessors": {
                "Deinterlacer": {
                  "Algorithm": "INTERPOLATE",
                  "Mode": "DEINTERLACE"
                }
              },
              "AntiAlias": "ENABLED",
              "Sharpness": 100,
              "CodecSettings": {
                "Codec": "H_264",
                "H264Settings": {
                  "InterlaceMode": "PROGRESSIVE",
                  "ParNumerator": 1,
                  "NumberReferenceFrames": 3,
                  "Softness": 0,
                  "FramerateDenominator": 1001,
                  "GopClosedCadence": 1,
                  "GopSize": 90,
                  "Slices": 1,
                  "HrdBufferSize": 12500000,
                  "ParDenominator": 1,
                  "SpatialAdaptiveQuantization": "ENABLED",
                  "TemporalAdaptiveQuantization": "DISABLED",
                  "FlickerAdaptiveQuantization": "DISABLED",
                  "EntropyEncoding": "CABAC",
                  "Bitrate": 8500000,
                  "FramerateControl": "SPECIFIED",
                  "RateControlMode": "CBR",
                  "CodecProfile": "HIGH",
                  "Telecine": "NONE",
                  "FramerateNumerator": 30000,
                  "MinIInterval": 0,
                  "AdaptiveQuantization": "MEDIUM",
                  "CodecLevel": "LEVEL_4",
                  "SceneChangeDetect": "ENABLED",
                  "QualityTuningLevel": "SINGLE_PASS_HQ",
                  "GopSizeUnits": "FRAMES",
                  "ParControl": "SPECIFIED",
                  "NumberBFramesBetweenReferenceFrames": 3,
                  "HrdBufferInitialFillPercentage": 90,
                  "Syntax": "DEFAULT"
                }
              },
              "AfdSignaling": "NONE",
              "DropFrameTimecode": "ENABLED",
              "RespondToAfd": "NONE",
              "ColorMetadata": "INSERT"
            },
            "AudioDescriptions": [
              {
                "AudioTypeControl": "FOLLOW_INPUT",
                "AudioSourceName": "Audio Selector 1",
                "CodecSettings": {
                  "Codec": "AAC",
                  "AacSettings": {
                    "Bitrate": 128000,
                    "RateControlMode": "CBR",
                    "CodecProfile": "LC",
                    "CodingMode": "CODING_MODE_2_0",
                    "SampleRate": 48000
                  }
                },
                "LanguageCodeControl": "FOLLOW_INPUT"
              }
            ],
            "NameModifier": "_high"
          },
          {
            "VideoDescription": {
              "ScalingBehavior": "DEFAULT",
              "TimecodeInsertion": "DISABLED",
              "AntiAlias": "ENABLED",
              "Sharpness": 50,
              "CodecSettings": {
                "Codec": "H_264",
                "H264Settings": {
                  "InterlaceMode": "PROGRESSIVE",
                  "NumberReferenceFrames": 3,
                  "Syntax": "DEFAULT",
                  "Softness": 0,
                  "GopClosedCadence": 1,
                  "GopSize": 90,
                  "Slices": 1,
                  "GopBReference": "DISABLED",
                  "SlowPal": "DISABLED",
                  "SpatialAdaptiveQuantization": "ENABLED",
                  "TemporalAdaptiveQuantization": "ENABLED",
                  "FlickerAdaptiveQuantization": "DISABLED",
                  "EntropyEncoding": "CABAC",
                  "Bitrate": 7500000,
                  "FramerateControl": "INITIALIZE_FROM_SOURCE",
                  "RateControlMode": "CBR",
                  "CodecProfile": "MAIN",
                  "Telecine": "NONE",
                  "MinIInterval": 0,
                  "AdaptiveQuantization": "HIGH",
                  "CodecLevel": "AUTO",
                  "FieldEncoding": "PAFF",
                  "SceneChangeDetect": "ENABLED",
                  "QualityTuningLevel": "SINGLE_PASS",
                  "FramerateConversionAlgorithm": "DUPLICATE_DROP",
                  "UnregisteredSeiTimecode": "DISABLED",
                  "GopSizeUnits": "FRAMES",
                  "ParControl": "INITIALIZE_FROM_SOURCE",
                  "NumberBFramesBetweenReferenceFrames": 2,
                  "RepeatPps": "DISABLED"
                }
              },
              "AfdSignaling": "NONE",
              "DropFrameTimecode": "ENABLED",
              "RespondToAfd": "NONE",
              "ColorMetadata": "INSERT",
              "Width": 1280,
              "Height": 720
            },
            "AudioDescriptions": [
              {
                "AudioTypeControl": "FOLLOW_INPUT",
                "CodecSettings": {
                  "Codec": "AAC",
                  "AacSettings": {
                    "AudioDescriptionBroadcasterMix": "NORMAL",
                    "Bitrate": 96000,
                    "RateControlMode": "CBR",
                    "CodecProfile": "LC",
                    "CodingMode": "CODING_MODE_2_0",
                    "RawFormat": "NONE",
                    "SampleRate": 48000,
                    "Specification": "MPEG4"
                  }
                },
                "LanguageCodeControl": "FOLLOW_INPUT"
              }
            ],
            "OutputSettings": {
              "HlsSettings": {
                "AudioGroupId": "program_audio",
                "AudioRenditionSets": "program_audio",
                "IFrameOnlyManifest": "EXCLUDE"
              }
            },
            "ContainerSettings": {
              "Container": "M3U8",
              "M3u8Settings": {
                "AudioFramesPerPes": 4,
                "PcrControl": "PCR_EVERY_PES_PACKET",
                "PmtPid": 480,
                "PrivateMetadataPid": 503,
                "ProgramNumber": 1,
                "PatInterval": 0,
                "PmtInterval": 0,
                "Scte35Source": "NONE",
                "Scte35Pid": 500,
                "TimedMetadata": "NONE",
                "TimedMetadataPid": 502,
                "VideoPid": 481,
                "AudioPids": [
                  482,
                  483,
                  484,
                  485,
                  486,
                  487,
                  488,
                  489,
                  490,
                  491,
                  492
                ]
              }
            },
            "NameModifier": "_med"
          },
          {
            "VideoDescription": {
              "ScalingBehavior": "DEFAULT",
              "TimecodeInsertion": "DISABLED",
              "AntiAlias": "ENABLED",
              "Sharpness": 100,
              "CodecSettings": {
                "Codec": "H_264",
                "H264Settings": {
                  "InterlaceMode": "PROGRESSIVE",
                  "NumberReferenceFrames": 3,
                  "Syntax": "DEFAULT",
                  "Softness": 0,
                  "GopClosedCadence": 1,
                  "GopSize": 90,
                  "Slices": 1,
                  "GopBReference": "DISABLED",
                  "SlowPal": "DISABLED",
                  "SpatialAdaptiveQuantization": "ENABLED",
                  "TemporalAdaptiveQuantization": "ENABLED",
                  "FlickerAdaptiveQuantization": "DISABLED",
                  "EntropyEncoding": "CABAC",
                  "Bitrate": 3500000,
                  "FramerateControl": "INITIALIZE_FROM_SOURCE",
                  "RateControlMode": "CBR",
                  "CodecProfile": "MAIN",
                  "Telecine": "NONE",
                  "MinIInterval": 0,
                  "AdaptiveQuantization": "HIGH",
                  "CodecLevel": "LEVEL_3_1",
                  "FieldEncoding": "PAFF",
                  "SceneChangeDetect": "ENABLED",
                  "QualityTuningLevel": "SINGLE_PASS_HQ",
                  "FramerateConversionAlgorithm": "DUPLICATE_DROP",
                  "UnregisteredSeiTimecode": "DISABLED",
                  "GopSizeUnits": "FRAMES",
                  "ParControl": "INITIALIZE_FROM_SOURCE",
                  "NumberBFramesBetweenReferenceFrames": 2,
                  "RepeatPps": "DISABLED"
                }
              },
              "AfdSignaling": "NONE",
              "DropFrameTimecode": "ENABLED",
              "RespondToAfd": "NONE",
              "ColorMetadata": "INSERT",
              "Width": 960,
              "Height": 540
            },
            "AudioDescriptions": [
              {
                "AudioTypeControl": "FOLLOW_INPUT",
                "CodecSettings": {
                  "Codec": "AAC",
                  "AacSettings": {
                    "AudioDescriptionBroadcasterMix": "NORMAL",
                    "Bitrate": 96000,
                    "RateControlMode": "CBR",
                    "CodecProfile": "LC",
                    "CodingMode": "CODING_MODE_2_0",
                    "RawFormat": "NONE",
                    "SampleRate": 48000,
                    "Specification": "MPEG4"
                  }
                },
                "LanguageCodeControl": "FOLLOW_INPUT"
              }
            ],
            "OutputSettings": {
              "HlsSettings": {
                "AudioGroupId": "program_audio",
                "AudioRenditionSets": "program_audio",
                "IFrameOnlyManifest": "EXCLUDE"
              }
            },
            "ContainerSettings": {
              "Container": "M3U8",
              "M3u8Settings": {
                "AudioFramesPerPes": 4,
                "PcrControl": "PCR_EVERY_PES_PACKET",
                "PmtPid": 480,
                "PrivateMetadataPid": 503,
                "ProgramNumber": 1,
                "PatInterval": 0,
                "PmtInterval": 0,
                "Scte35Source": "NONE",
                "Scte35Pid": 500,
                "TimedMetadata": "NONE",
                "TimedMetadataPid": 502,
                "VideoPid": 481,
                "AudioPids": [
                  482,
                  483,
                  484,
                  485,
                  486,
                  487,
                  488,
                  489,
                  490,
                  491,
                  492
                ]
              }
            },
            "NameModifier": "_low"
          }
        ],
        "OutputGroupSettings": {
          "Type": "HLS_GROUP_SETTINGS",
          "HlsGroupSettings": {
            "ManifestDurationFormat": "INTEGER",
            "SegmentLength": 10,
            "TimedMetadataId3Period": 10,
            "CaptionLanguageSetting": "OMIT",
            "Destination": "s3://amzn-s3-demo-bucket1/hls1/main",
            "TimedMetadataId3Frame": "PRIV",
            "CodecSpecification": "RFC_4281",
            "OutputSelection": "MANIFESTS_AND_SEGMENTS",
            "ProgramDateTimePeriod": 600,
            "MinSegmentLength": 0,
            "DirectoryStructure": "SINGLE_DIRECTORY",
            "ProgramDateTime": "EXCLUDE",
            "SegmentControl": "SEGMENTED_FILES",
            "ManifestCompression": "NONE",
            "ClientCache": "ENABLED",
            "StreamInfResolution": "INCLUDE"
          }
        }
      }
    ],
    "AdAvailOffset": 0,
    "Inputs": [
      {
        "AudioSelectors": {
          "Audio Selector 1": {
            "Tracks": [
              1
            ],
            "Offset": 0,
            "DefaultSelection": "DEFAULT",
            "SelectorType": "TRACK",
            "ProgramSelection": 1
          },
          "Audio Selector 2": {
            "Tracks": [
              2
            ],
            "Offset": 0,
            "DefaultSelection": "NOT_DEFAULT",
            "SelectorType": "TRACK",
            "ProgramSelection": 1
          }
        },
        "VideoSelector": {
          "ColorSpace": "FOLLOW"
        },
        "FilterEnable": "AUTO",
        "PsiControl": "USE_PSI",
        "FilterStrength": 0,
        "DeblockFilter": "DISABLED",
        "DenoiseFilter": "DISABLED",
        "TimecodeSource": "EMBEDDED",
        "FileInput": "s3://amzn-s3-demo-bucket"
      }
    ]
  }
}
```
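
Each output in this group names its files with a `NameModifier` (`_high`, `_med`, `_low`) so that the three renditions don't collide at the shared destination. A small local sanity check like the following (illustrative only, not MediaConvert's validator) can catch duplicate modifiers before you submit:

```python
def name_modifiers_are_unique(output_group: dict) -> bool:
    """Return True if every output in the group has a distinct NameModifier."""
    modifiers = [o.get("NameModifier") for o in output_group.get("Outputs", [])]
    return len(modifiers) == len(set(modifiers))

group = {"Outputs": [{"NameModifier": "_high"},
                     {"NameModifier": "_med"},
                     {"NameModifier": "_low"}]}
print(name_modifiers_are_unique(group))  # True
```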



## Example: Automated ABR
<a name="auto-abr-example"></a>

This example JSON job specification creates an automated ABR stack in Apple HLS. In addition to the automated ABR settings, it explicitly sets these values:
+ Accelerated transcoding `Mode` to `PREFERRED`
+ `rateControlMode` to `QVBR`
+ `qualityTuningLevel` to `MULTI_PASS_HQ`

For information about the automated ABR settings, see [Automated ABR](auto-abr.md).
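
If you start from a job specification that doesn't already set these values, the three overrides amount to small edits of the job dictionary, as this hedged sketch shows. The starting dictionary here is a hypothetical minimal skeleton, not a complete job:

```python
job = {
    "Settings": {
        "OutputGroups": [
            {"Outputs": [{"VideoDescription": {"CodecSettings": {
                "Codec": "H_264", "H264Settings": {}}}}]}
        ]
    }
}

# Accelerated transcoding Mode is a job-level setting, outside Settings.
job["AccelerationSettings"] = {"Mode": "PREFERRED"}

# RateControlMode and QualityTuningLevel live in each video output's codec settings.
h264 = job["Settings"]["OutputGroups"][0]["Outputs"][0] \
          ["VideoDescription"]["CodecSettings"]["H264Settings"]
h264["RateControlMode"] = "QVBR"
h264["QualityTuningLevel"] = "MULTI_PASS_HQ"
```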

```
{
  "UserMetadata": {},
  "Role": "ROLE ARN",
  "Settings": {
    "TimecodeConfig": {
      "Source": "ZEROBASED"
    },
    "OutputGroups": [
      {
        "Name": "Apple HLS",
        "Outputs": [
          {
            "ContainerSettings": {
              "Container": "M3U8",
              "M3u8Settings": {
                "AudioFramesPerPes": 4,
                "PcrControl": "PCR_EVERY_PES_PACKET",
                "PmtPid": 480,
                "PrivateMetadataPid": 503,
                "ProgramNumber": 1,
                "PatInterval": 0,
                "PmtInterval": 0,
                "Scte35Source": "NONE",
                "NielsenId3": "NONE",
                "TimedMetadata": "NONE",
                "VideoPid": 481,
                "AudioPids": [
                  482,
                  483,
                  484,
                  485,
                  486,
                  487,
                  488,
                  489,
                  490,
                  491,
                  492
                ]
              }
            },
            "VideoDescription": {
              "ScalingBehavior": "DEFAULT",
              "TimecodeInsertion": "DISABLED",
              "AntiAlias": "ENABLED",
              "Sharpness": 50,
              "CodecSettings": {
                "Codec": "H_264",
                "H264Settings": {
                  "InterlaceMode": "PROGRESSIVE",
                  "NumberReferenceFrames": 3,
                  "Syntax": "DEFAULT",
                  "Softness": 0,
                  "FramerateDenominator": 1,
                  "GopClosedCadence": 1,
                  "GopSize": 60,
                  "Slices": 2,
                  "GopBReference": "DISABLED",
                  "EntropyEncoding": "CABAC",
                  "FramerateControl": "SPECIFIED",
                  "RateControlMode": "QVBR",
                  "CodecProfile": "MAIN",
                  "Telecine": "NONE",
                  "FramerateNumerator": 30,
                  "MinIInterval": 0,
                  "AdaptiveQuantization": "AUTO",
                  "CodecLevel": "AUTO",
                  "FieldEncoding": "PAFF",
                  "SceneChangeDetect": "ENABLED",
                  "QualityTuningLevel": "MULTI_PASS_HQ",
                  "FramerateConversionAlgorithm": "DUPLICATE_DROP",
                  "UnregisteredSeiTimecode": "DISABLED",
                  "GopSizeUnits": "FRAMES",
                  "ParControl": "INITIALIZE_FROM_SOURCE",
                  "NumberBFramesBetweenReferenceFrames": 2,
                  "RepeatPps": "DISABLED",
                  "DynamicSubGop": "STATIC"
                }
              },
              "AfdSignaling": "NONE",
              "DropFrameTimecode": "ENABLED",
              "RespondToAfd": "NONE",
              "ColorMetadata": "INSERT"
            },
            "OutputSettings": {
              "HlsSettings": {
                "AudioGroupId": "program_audio",
                "AudioRenditionSets": "program_audio",
                "AudioOnlyContainer": "AUTOMATIC",
                "IFrameOnlyManifest": "EXCLUDE"
              }
            },
            "NameModifier": "video"
          },
          {
            "ContainerSettings": {
              "Container": "M3U8",
              "M3u8Settings": {
                "AudioFramesPerPes": 4,
                "PcrControl": "PCR_EVERY_PES_PACKET",
                "PmtPid": 480,
                "PrivateMetadataPid": 503,
                "ProgramNumber": 1,
                "PatInterval": 0,
                "PmtInterval": 0,
                "Scte35Source": "NONE",
                "NielsenId3": "NONE",
                "TimedMetadata": "NONE",
                "TimedMetadataPid": 502,
                "VideoPid": 481,
                "AudioPids": [
                  482,
                  483,
                  484,
                  485,
                  486,
                  487,
                  488,
                  489,
                  490,
                  491,
                  492
                ]
              }
            },
            "AudioDescriptions": [
              {
                "AudioTypeControl": "FOLLOW_INPUT",
                "AudioSourceName": "Audio Selector 1",
                "CodecSettings": {
                  "Codec": "AAC",
                  "AacSettings": {
                    "AudioDescriptionBroadcasterMix": "NORMAL",
                    "Bitrate": 96000,
                    "RateControlMode": "CBR",
                    "CodecProfile": "LC",
                    "CodingMode": "CODING_MODE_2_0",
                    "RawFormat": "NONE",
                    "SampleRate": 48000,
                    "Specification": "MPEG4"
                  }
                },
                "LanguageCodeControl": "FOLLOW_INPUT"
              }
            ],
            "OutputSettings": {
              "HlsSettings": {
                "AudioGroupId": "program_audio",
                "AudioTrackType": "ALTERNATE_AUDIO_AUTO_SELECT_DEFAULT",
                "AudioOnlyContainer": "AUTOMATIC",
                "IFrameOnlyManifest": "EXCLUDE"
              }
            },
            "NameModifier": "audio"
          }
        ],
        "OutputGroupSettings": {
          "Type": "HLS_GROUP_SETTINGS",
          "HlsGroupSettings": {
            "ManifestDurationFormat": "FLOATING_POINT",
            "SegmentLength": 10,
            "TimedMetadataId3Period": 10,
            "CaptionLanguageSetting": "OMIT",
            "Destination": "s3://amzn-s3-demo-bucket1/main",
            "TimedMetadataId3Frame": "PRIV",
            "CodecSpecification": "RFC_4281",
            "OutputSelection": "MANIFESTS_AND_SEGMENTS",
            "ProgramDateTimePeriod": 600,
            "MinSegmentLength": 0,
            "MinFinalSegmentLength": 0,
            "DirectoryStructure": "SINGLE_DIRECTORY",
            "ProgramDateTime": "EXCLUDE",
            "SegmentControl": "SEGMENTED_FILES",
            "ManifestCompression": "NONE",
            "ClientCache": "ENABLED",
            "AudioOnlyHeader": "INCLUDE",
            "StreamInfResolution": "INCLUDE"
          }
        },
        "AutomatedEncodingSettings": {
          "AbrSettings": {
            "MaxRenditions": 6,
            "MaxAbrBitrate": 5000000,
            "MinAbrBitrate": 300000
          }
        }
      }
    ],
    "AdAvailOffset": 0,
    "Inputs": [
      {
        "AudioSelectors": {
          "Audio Selector 1": {
            "Offset": 0,
            "DefaultSelection": "DEFAULT",
            "ProgramSelection": 1
          }
        },
        "VideoSelector": {
          "ColorSpace": "FOLLOW",
          "Rotate": "DEGREE_0",
          "AlphaBehavior": "DISCARD"
        },
        "FilterEnable": "AUTO",
        "PsiControl": "USE_PSI",
        "FilterStrength": 0,
        "DeblockFilter": "DISABLED",
        "DenoiseFilter": "DISABLED",
        "InputScanType": "AUTO",
        "TimecodeSource": "ZEROBASED",
        "FileInput": "s3://amzn-s3-demo-bucket/test.mov"
      }
    ]
  },
  "AccelerationSettings": {
    "Mode": "PREFERRED"
  },
  "StatusUpdateInterval": "SECONDS_60",
  "Priority": 0
}
```



# Specifying input files and input clips
<a name="specifying-inputs"></a>

You can use MediaConvert for *assembly workflows*. An assembly workflow is a MediaConvert job that performs basic input clipping and stitching to assemble output assets from different sources without requiring separate editing software. For example, an assembly workflow can put together a bumper followed by feature content that is interleaved with advertisements. The feature content might have a logo graphic overlay at the beginning of each feature segment.

With these kinds of jobs, you assemble your outputs from multiple inputs by using *input stitching*, or portions of inputs by using *input clipping*. MediaConvert creates all of a job's outputs from this assembly. If you want outputs with different clips of the input files or with different arrangements of the inputs, you must create a separate job for each assembly.

**Topics**
+ [How MediaConvert uses timelines to assemble jobs](#how-mediaconvert-uses-timelines-to-assemble-jobs)
+ [Setting up an assembly workflow job](#setting-up-an-assembly-workflow-job)
+ [Setting up audio tracks and audio selectors](more-about-audio-tracks-selectors.md)
+ [Setting up input captions](including-captions.md)

## How MediaConvert uses timelines to assemble jobs
<a name="how-mediaconvert-uses-timelines-to-assemble-jobs"></a>

MediaConvert assembles inputs and input clips according to *input timelines* and the *output timeline*. The service constructs these timelines based on your settings, and then assembles your inputs into outputs based on them. The following illustration shows three independent input timelines and an output timeline.

![\[Three separate input files are represented with three rectangles. Each is marked with a number line that represents an input timeline. One timeline starts at zero. One timeline shows embedded timecodes. One timeline reflects a specified start setting that starts at one hour. Two of these rectangles have clips inside them, represented with color fill in only parts of the rectangle. One of the rectangles is filled entirely, representing that the entire input file is used in the output. Below the input rectangles is a wider rectangle that represents all the clips and inputs assembled together. This rectangle is marked with a number line that represents the output timeline, which starts at 00:00:00:00.\]](http://docs.aws.amazon.com/mediaconvert/latest/ug/images/assembly.png)


### Input timelines
<a name="input-timelines"></a>

Each input has its own *input timeline*. An input timeline is a series of timecodes that MediaConvert generates to represent each frame of the input file.

By default, the input timeline is the same as any timecodes embedded in the input video. You can specify a different starting timecode in the input setting **Timecode source**. If you use the API or an SDK, you can find this setting in the JSON file of your job. The setting name is `TimecodeSource`, located in `Settings`, `Inputs`. For more information, see [Adjusting the input timeline with the input timecode source](timecode-input.md).

MediaConvert uses the input timeline for the following:
+ Determining when input graphic overlays (inserted images) appear in the video. For more information about the difference between input and output overlays, see [Choosing between input and output overlays](choosing-between-input-overlay-and-output-overlay.md).
+ Determining when motion graphic overlays (inserted images) appear in the video. For more information about the different types of graphic overlays, see [Image insertion](graphic-overlay.md).
+ Synchronizing your video with *sidecar captions* that are in a timecode-based format. Sidecar captions are captions that you provide as input files that are separate from the video.
+ Interpreting the timecodes that you provide when you specify input clips.
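
The relationship between frames and timecodes on an input timeline is straightforward arithmetic for non-drop-frame timecode (drop-frame counting, used with 29.97 fps material, is more involved). An illustrative sketch:

```python
def timecode_to_frames(tc: str, fps: int) -> int:
    """Convert an HH:MM:SS:FF non-drop-frame timecode to a frame count."""
    hh, mm, ss, ff = (int(part) for part in tc.split(":"))
    return (hh * 3600 + mm * 60 + ss) * fps + ff

def frames_to_timecode(start_tc: str, fps: int, frame_index: int) -> str:
    """Timecode of a given frame on an input timeline that starts at start_tc."""
    total = timecode_to_frames(start_tc, fps) + frame_index
    ff = total % fps
    ss = (total // fps) % 60
    mm = (total // (fps * 60)) % 60
    hh = total // (fps * 3600)
    return f"{hh:02d}:{mm:02d}:{ss:02d}:{ff:02d}"

# 48 frames into a 24 fps input whose timeline starts at one hour:
print(frames_to_timecode("01:00:00:00", 24, 48))  # 01:00:02:00
```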

### Output timeline
<a name="output-timeline"></a>

The *output timeline* is the series of timecodes that MediaConvert generates to embed in the outputs. MediaConvert also uses the timecodes of the output timeline for features that apply to every output in the job.

By default, the output timeline is the same as any timecodes embedded in the video of your first input file. You can specify a different starting timecode in the job-wide **Timecode configuration** settings under **Job settings**. If you use the API or an SDK, you can find these settings in the JSON file of your job. These settings are under `Settings`, `TimecodeConfig`. For more information, see [Adjusting the output timeline with the job-wide timecode configuration](timecode-jobconfig.md).

MediaConvert uses the output timeline for the following:
+ Determining which timecodes to embed in the output video, when you enable **Timecode insertion** in your output timecode settings.
+ Determining when output overlays (inserted images) appear in the video. For more information about the different types of graphic overlays, see [Image insertion](graphic-overlay.md).
+ Determining how your HLS variant playlists show time.
+ Interpreting the timecode that you provide when you specify a value for **Anchor timecode**.
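
In the job JSON, starting the output timeline at 00:00:00:00 instead of following the first input's embedded timecodes is a one-line change under `Settings`, as the Automated ABR example earlier in this chapter shows. A minimal sketch (the skeleton dictionary here is illustrative):

```python
job_settings = {"Inputs": [], "OutputGroups": []}  # minimal skeleton for illustration

# Start the output timeline at zero rather than following embedded timecodes.
job_settings["TimecodeConfig"] = {"Source": "ZEROBASED"}
```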

## Setting up an assembly workflow job
<a name="setting-up-an-assembly-workflow-job"></a>

Follow these steps to set up a job that combines assembly workflow features such as input clipping, input stitching, graphic overlay, and sidecar captions sync. Doing these tasks in this order can make setup easier. In particular, we recommend that you specify your input clips last. This is because each input timeline counts frames from the entire input, not from each individual clip.

This procedure relies on the concept of input and output timelines. For more information, see [How MediaConvert uses timelines to assemble jobs](#how-mediaconvert-uses-timelines-to-assemble-jobs).

**To set up an assembly workflow job (console)**

1. **Specify your video input files.**

   You can have up to 150 inputs in a job. MediaConvert stitches together the inputs in the order that you add them. To use multiple clips from the same input file in chronological order without other inputs in between them, specify the input file only once.

   For full instructions, see [Step 1: Specify input files](setting-up-a-job.md#specify-input-settings).

1. **Set up your audio selectors.**

   In each input, you create audio selectors to map your input audio to your outputs. For instructions, see [Step 2: Create input selectors for video, audio, and captions](setting-up-a-job.md#create-selectors).

   With sidecar audio files, MediaConvert synchronizes audio and video without regard to timecodes. MediaConvert lines up the start of the audio file with the start of the video file.

   Whether your audio is in a sidecar file or embedded in the video, you can adjust its sync using the **Offset** setting in the input audio selector. Use a positive number for **Offset** to move the audio later in the input timeline; use a negative number to move it earlier.

1. **Synchronize any sidecar captions.**

   How you set up your sidecar captions sync depends on the input captions format:
   + If your input captions format is timecode-based (for example, SCC or STL), the service synchronizes the timecode in the captions file with the input timeline.
   + If your input captions format is timestamp-based (for example, SRT, SMI, or TTML), the service synchronizes the captions with the video without regard to timecodes.

   **Related information**
   + [About input timecode source and captions alignment](about-input-timecode-source-and-captions-alignment.md)
   + [Adjusting the input timeline with the input timecode source](timecode-input.md)
   + [Captions and captions selectors](including-captions.md)

1. **Set up when you want any graphic overlays or motion graphic overlays to appear.**

   How you specify the time that the overlay appears depends on what kind of overlay you specify:
   + For input still graphic overlays, specify the overlay in the input where you want the overlay to appear. Specify the start and end times with timecodes that match with that input's timeline.
   + For output still graphic overlays, specify when you want the overlay to appear based on the output timeline.
   + For motion graphic overlays, specify when you want the overlay to appear based on the inputs' timelines.

   **Related information**
   + [Adjusting the input timeline with the input timecode source](timecode-input.md)
   + [Adjusting the output timeline with the job-wide timecode configuration](timecode-jobconfig.md)
   + [Image insertion](graphic-overlay.md)

1. **Specify input clips.**

   Unless you want MediaConvert to include the full duration of the input, specify input clips for each input. Specify the start and end times with timecodes that match with that input's timeline.

   Set up input clips as follows:

   1. On the **Create job** page, in the **Job** pane on the left, choose an input.

   1. In the **Input clips** section, choose **Add input clip**.

   1. Enter the starting and ending timecodes for the first clip that you want to include. Use the following 24-hour format with a frame number: HH:MM:SS:FF.

      When you specify an input clip for an audio-only input, the last numbers in the timecode that you enter correspond to hundredths of a second. For example, 00:00:30:75 is the same as 30.75 seconds.

      Make sure that you provide timecodes that align with your input timeline. By default, MediaConvert bases input clipping on timecodes that are embedded in your input video. How you align your timecodes depends on whether your input video has embedded timecodes:
      + If your input doesn't have embedded timecodes, set **Timecode source** to **Start at 0** or **Specified start**.
      + If your input *does* have embedded timecodes and you want MediaConvert to use them, for **Timecode source**, keep the default value, **Embedded**. Specify your clip start and end times accordingly.

        For example, if an input's **Timecode source** is set to **Embedded** and the video's embedded timecodes start at 01:00:00:00, the start timecode for a clip that begins 30 seconds in is 01:00:30:00 (not 00:00:30:00). By default, the input timeline is the same as the timecodes that are embedded in the video. You can change what determines the input timeline by adjusting the input **Timecode source** setting.
      + Specify an input clip duration that is less than 12 hours long.

      For more information, see [Adjusting the input timeline with the input timecode source](timecode-input.md).

   1. Specify any additional clips. Multiple clips must be in chronological order and can't overlap; each **Start timecode** must come after the previous clip's **End timecode**.

      If you specify more than one input clip, they all appear in the output, one after the other, in the order that you specify them.
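The clip rules in the steps above can be sketched as a small validation helper. This is illustrative code, not part of MediaConvert; the frame rate is an assumption you supply for your own input.

```python
# Mirrors the console rules for input clips: HH:MM:SS:FF timecodes, each clip's
# end after its start, clips in chronological order with no overlap, and each
# clip shorter than 12 hours.

def tc_to_frames(tc: str, fps: int) -> int:
    hh, mm, ss, ff = (int(part) for part in tc.split(":"))
    return (hh * 3600 + mm * 60 + ss) * fps + ff

def validate_clips(clips: list, fps: int = 30) -> None:
    prev_end = None
    for start, end in clips:
        s, e = tc_to_frames(start, fps), tc_to_frames(end, fps)
        if e <= s:
            raise ValueError(f"clip end {end} is not after start {start}")
        if (e - s) > 12 * 3600 * fps:
            raise ValueError("clip duration must be less than 12 hours")
        if prev_end is not None and s <= prev_end:
            raise ValueError(f"clip starting {start} overlaps the previous clip")
        prev_end = e

# Two clips against an input timeline whose embedded timecodes start at one hour.
validate_clips([("01:00:00:00", "01:00:30:00"), ("01:02:00:00", "01:05:00:00")])
```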

# Setting up audio tracks and audio selectors
<a name="more-about-audio-tracks-selectors"></a>

You use audio selectors to associate input audio with output audio. You can set up a single audio selector to represent one or more tracks from the input. After that, you create audio tracks in the output and associate a single audio selector with each output track.

Associations between input audio tracks, audio selectors, and output audio tracks follow these rules:
+ Each input track can be associated with one or more audio selectors.
+ Each audio selector has one or more input tracks.
+ Each output track has one audio selector.

The following illustration shows these relationships. In the illustration, the input file contains three audio tracks. Audio selector 1 selects input track 1. Audio selector 1 is associated with output audio track 1, so track 1 of the output has the same content as track 1 of the input. The second input audio track is not selected by an audio selector, so it isn't used in the output. Audio selector 2 selects input tracks 1 and 3. Audio selector 2 is associated with output audio track 2, so output track 2 contains the channels from input tracks 1 and 3.

![\[Use audio selectors to associate input tracks with output tracks.\]](http://docs.aws.amazon.com/mediaconvert/latest/ug/images/audio-selectors-shared-vsd.png)
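In the job JSON, the mapping in the illustration might be sketched as follows. This is a minimal sketch: the selector names are examples, and the field names follow the public MediaConvert API schema, so verify them against your own job settings. The `Offset` field (in milliseconds) adjusts audio sync; a positive value moves the audio later, a negative value earlier.

```python
# Input side (Settings.Inputs[].AudioSelectors): selector 1 takes input
# track 1; selector 2 takes input tracks 1 and 3.
audio_selectors = {
    "Audio Selector 1": {"SelectorType": "TRACK", "Tracks": [1], "Offset": 0},
    "Audio Selector 2": {"SelectorType": "TRACK", "Tracks": [1, 3], "Offset": 0},
}

# Output side: one AudioDescriptions entry per output audio track, each
# naming exactly one audio selector.
audio_descriptions = [
    {"AudioSourceName": "Audio Selector 1"},   # output track 1
    {"AudioSourceName": "Audio Selector 2"},   # output track 2
]
```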


For workflows that require channel-level control, use the audio channel remix feature, which supports the following workflows:
+ Changing the order of channels in an audio track
+ Moving audio channels from one or more input tracks to different output tracks
+ Combining the audio from multiple channels into a single channel
+ Splitting the audio from a single channel into multiple channels
+ Adjusting the loudness level of audio channels
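As one hedged example of channel-level control, a four-channel input remixed to stereo might look like the following fragment under an output's `AudioDescriptions[].RemixSettings`. The gain values are examples; each `OutputChannels` entry lists a per-input-channel gain in dB, with -60 muting that channel. Field names follow the public MediaConvert API schema.

```python
# Remix four input channels down to stereo: channels 1 and 3 feed the left
# output channel, channels 2 and 4 feed the right, with channels 3 and 4
# attenuated by 6 dB.
remix_settings = {
    "ChannelsIn": 4,
    "ChannelsOut": 2,
    "ChannelMapping": {
        "OutputChannels": [
            {"InputChannels": [0, -60, -6, -60]},   # left
            {"InputChannels": [-60, 0, -60, -6]},   # right
        ]
    },
}

# Sanity check: every output channel specifies a gain for all input channels.
assert all(
    len(ch["InputChannels"]) == remix_settings["ChannelsIn"]
    for ch in remix_settings["ChannelMapping"]["OutputChannels"]
)
```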

# Setting up input captions
<a name="including-captions"></a>

To include captions in your job, follow these steps in the order listed:

1. If your input captions are a timecode-based sidecar captions format, such as SCC or STL, [set the timecode source settings.](#set-the-timecode-source-settings)

1. [Gather required captions information.](#gather-required-captions-information)

1. [Create input captions selectors.](#create-input-caption-selectors)

1. [Set up captions in outputs.](set-up-captions-in-outputs.md)

For a full list of supported input and output captions, see [Captions reference tables](captions-support-tables.md).

For information about how to set up captions in your output, see [Setting up captions in outputs](set-up-captions-in-outputs.md).

**Tip**  
You can use Amazon Transcribe with MediaConvert to generate captions and include them in your output. For more information, see [AWS VOD captioning using Amazon Transcribe](https://github.com/aws-samples/aws-transcribe-captioning-tools) in *AWS Samples* on GitHub.

## Specifying the timecode source
<a name="set-the-timecode-source-settings"></a>

For your captions to correctly synchronize with your video, you must set up your input timeline to match the timecodes embedded in your captions file. MediaConvert establishes the input timeline based on the value you choose for the input **Timecode source** setting. For more information, see [Input timecode source and captions alignment](about-input-timecode-source-and-captions-alignment.md).

For instructions on adjusting the **Timecode source** setting, see [Adjusting the input timeline with the input timecode source](timecode-input.md).

## Gathering required captions information
<a name="gather-required-captions-information"></a>

Before you set up captions in your job, note the following information:
+ The *input captions format*. You must have this information ahead of time; MediaConvert does not read this from your input files.
+ The *tracks* from the input captions that you intend to use in any of your outputs.
+ The *output packages and files* that you intend to create with the job. For information about specifying the output package or file type, see [Creating outputs](output-settings.md).
+ The *output captions format* that you intend to use in each output.

  For supported output captions based on your input container, input captions format, and output container, see [Supported input captions, within video containers](captions-support-tables-by-container-type.md). 
+ The *output captions tracks* that you intend to include for each output. If you pass through teletext-to-teletext, all tracks in the input are available in the output. Otherwise, the tracks that you include in an output might be a subset of the tracks that are available in the input.

## Creating input captions selectors
<a name="create-input-caption-selectors"></a>

When you set up captions, you begin by creating captions selectors. Captions selectors identify a particular captions asset on the input and associate a label with it. The captions asset is either a single track or the set of all tracks contained in the input file, depending on your input captions format. For example, you might add **Captions selector 1** and associate the French captions with it. When you [set up an output to include captions](set-up-captions-in-outputs.md), you do so by specifying captions selectors. 

**To create input captions selectors**

1. On the **Create job** page, in the **Job** pane on the left, choose an input.

   **Note**  
   In jobs with multiple inputs, each input must have the same number of captions selectors. For inputs that don't have captions, create empty captions selectors. For these selectors, for **Source**, choose **Null source**. If no inputs have captions, remove all captions selectors.

1. In the **Captions selectors** section, near the bottom of the page, choose **Add captions selector**. 

1. Under **Source**, choose the input captions format. 

1. For most formats, more fields appear. Specify the values for these fields as described in the topic that relates to your input captions format. Choose the appropriate topic from the list that follows this procedure.

1. Create more captions selectors as necessary. The number of captions selectors that you need depends on your input captions format. Choose the appropriate topic from the list that follows this procedure.

# QuickTime captions track or captions in MXF VANC data (ancillary) input captions
<a name="ancillary"></a>

If your input captions are in either of the following formats, the service handles them as "ancillary" data:
+ QuickTime captions track (format QTCC)
+ MXF VANC data

MediaConvert does not create output captions in these formats, but you can convert them to a [supported output format](captions-support-tables-by-container-type.md).

**For ancillary captions**
+ Create one captions selector per track that you will use in your outputs.
+ In each captions selector, for **Source**, choose **Ancillary**.
+ In each captions selector, for **CC channel**, choose the channel number for the track that is associated with the selector.

  For example, the input captions have English in CC channel 1 and Spanish in CC channel 2. To use these captions, create Captions selector 1, and then choose 1 in the **CC channel** dropdown list. Next, create Captions selector 2, and then choose 2 in the **CC channel** dropdown list.
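The English/Spanish example above might be sketched in the job JSON like this. The helper function is hypothetical; the `SourceType` and `AncillarySourceSettings` field names follow the public MediaConvert API schema, so verify them against your own job settings.

```python
# Fragments for Settings.Inputs[].CaptionSelectors: one selector per
# ancillary captions track, each pointing at a CC channel number.

def ancillary_selector(cc_channel: int) -> dict:
    return {
        "SourceSettings": {
            "SourceType": "ANCILLARY",
            "AncillarySourceSettings": {"SourceAncillaryChannelNumber": cc_channel},
        }
    }

caption_selectors = {
    "Captions selector 1": ancillary_selector(1),  # English on CC channel 1
    "Captions selector 2": ancillary_selector(2),  # Spanish on CC channel 2
}
```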

# Embedded (CEA/EIA-608, CEA/EIA-708), embedded+SCTE-20, and SCTE-20+embedded input captions
<a name="embedded"></a>

If your input captions are in any of the following formats, the service handles them as "embedded":
+ CEA-608
+ EIA-608
+ CEA-708
+ EIA-708

If your input captions have both embedded captions and SCTE-20 captions, and you want both types in your outputs, set up separate input captions selectors for the SCTE-20 and the embedded captions tracks. Set up the SCTE-20 captions selectors the same way that you set up the embedded selectors.

**Note**  
For MXF inputs, your captions are most likely on the ancillary track. Some third-party media analysis tools incorrectly report these captions as 608/708 embedded. For information on setting up ancillary captions, see [QuickTime captions track or captions in MXF VANC data (ancillary) input captions](ancillary.md).

## Number of captions selectors for embedded captions
<a name="embedded-how-many-caption-selectors"></a>
+ If all of your output captions are also an embedded format, create only one captions selector, even if you want to include multiple tracks in the output. With this setup, MediaConvert automatically extracts all tracks and includes them in the output.
+ If all of your outputs are in a format that is not embedded, create one captions selector for each track that you want to include in the output.
+ If some of your outputs have captions in an embedded format and some of your outputs have captions in a different format, create one captions selector for the outputs with embedded captions. Also create individual selectors for the outputs with other captions that aren't embedded, one for each track that you want in your outputs.

## Captions selector fields for embedded captions
<a name="embedded-caption-selector-fields"></a>

**Source**: Choose **Embedded**

**CC channel number**: This field specifies the track to extract. Complete as follows: 
+ If you are doing embedded-to-embedded captions (that is, you create only one captions selector for the input embedded captions), MediaConvert ignores this field, so keep the default value for **CC channel number**.
+ If you are converting embedded captions to another format (that is, you create several captions selectors, one for each track), specify the captions channel number from the input that holds the track that you want. To do that, select the channel number from the dropdown list. For example, select **1** to choose CC1.

**Note**  
MediaConvert doesn't automatically detect which language is in each channel. You can specify that when you set up the output captions, so that MediaConvert passes the language code metadata for the captions channel into the output for downstream use.
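Both setups can be sketched as job-JSON fragments under `Settings.Inputs[].CaptionSelectors`. The helper function is hypothetical; the `EmbeddedSourceSettings` and `Source608ChannelNumber` field names follow the public MediaConvert API schema.

```python
from typing import Optional

def embedded_selector(cc_channel: Optional[int] = None) -> dict:
    source_settings = {"SourceType": "EMBEDDED"}
    if cc_channel is not None:
        # Converting to another format: pick one track by its CC channel number.
        source_settings["EmbeddedSourceSettings"] = {"Source608ChannelNumber": cc_channel}
    return {"SourceSettings": source_settings}

# Embedded-to-embedded: one selector; the channel number is ignored.
passthrough = {"Captions selector 1": embedded_selector()}

# Embedded to another format: one selector per track you want.
converting = {
    "Captions selector 1": embedded_selector(1),
    "Captions selector 2": embedded_selector(3),
}
```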



# DVB-Sub input captions
<a name="dvb-sub-or-scte-27"></a>

MediaConvert supports DVB-Sub only in TS inputs.

In most cases, create one captions selector per track. In each selector, specify which track you want by providing the PID or language code.

**Note**  
Don't specify the captions in both the **PID** field and the **Language** dropdown list. Specify one or the other. 

If you are doing DVB-Sub-to-DVB-Sub and you want to pass through all the captions tracks from the input to the output, create one captions selector for all tracks. In this case, keep the **PID** field blank and don't choose a language from the **Language** dropdown list.
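The two DVB-Sub setups can be sketched as selector fragments under `Settings.Inputs[].CaptionSelectors`. The helper is hypothetical, and the PID value is an example only; the `DvbSubSourceSettings` field name follows the public MediaConvert API schema.

```python
from typing import Optional

def dvb_sub_selector(pid: Optional[int] = None) -> dict:
    source_settings = {"SourceType": "DVB_SUB"}
    if pid is not None:
        # Select a single captions track by its PID.
        source_settings["DvbSubSourceSettings"] = {"Pid": pid}
    return {"SourceSettings": source_settings}

one_track = dvb_sub_selector(pid=489)   # one selector per track, chosen by PID
all_tracks = dvb_sub_selector()         # passthrough: no PID, no language
```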

# Teletext input captions
<a name="teletext"></a>

How you set up your Teletext input captions selectors depends on how you plan to use the captions in your output. You can use Teletext captions in one of the following ways:
+ [Teletext to Teletext passthrough](#input-teletext-to-output-teletext-passthrough)

  With Teletext passthrough, MediaConvert passes through your input captions unchanged from the input to the output. Captions styling, Teletext page numbers, and non-caption Teletext data appear in your outputs exactly the same as in the input.

  Teletext passthrough is the only way to include Teletext data that isn't captions in your output.
+ [Teletext to Teletext, page remapping](#input-teletext-to-output-teletext-with-page-remapping)

  If you want the Teletext page numbers on your output to differ from the page numbers on the input, you can remap the content. When you do this, your output captions have plain styling and you lose any Teletext data that isn't captions.
+ [Teletext to other captions formats](#input-teletext-to-other-format-output-captions)

  You can use Teletext input captions to generate output captions in some other formats. To look up which captions you can generate from Teletext inputs, see [Captions reference tables](captions-support-tables.md).

For information on setting up captions for each of these workflows, see the following topics.

## Teletext to Teletext passthrough
<a name="input-teletext-to-output-teletext-passthrough"></a>

When you're doing Teletext to Teletext passthrough, create one input captions selector for the whole set of input captions. Don't specify a value for **Page number**.

For information about setting up the output of this captions workflow, see [Teletext to Teletext passthrough](teletext-output-captions.md#teletext-to-teletext-passthrough).

## Teletext to Teletext, page remapping
<a name="input-teletext-to-output-teletext-with-page-remapping"></a>

When the captions format for both your input and output captions is Teletext, and you want your output Teletext page numbers to be different from the input page numbers, create a separate input captions selector for each Teletext page of your input. Specify the input Teletext page number for **Page number**.

For information about setting up the output of this captions workflow, see [Teletext to Teletext, page remapping](teletext-output-captions.md#teletext-to-teletext-page-remapping).

## Teletext to other captions formats
<a name="input-teletext-to-other-format-output-captions"></a>

When your input captions are Teletext and your output captions are another format, set up one input captions selector for each input Teletext page. Specify the input Teletext page number for **Page number**.

For information about setting up the output of this captions workflow, see the section on your output format in [Setting up captions in outputs](set-up-captions-in-outputs.md).
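The Teletext workflows above can be sketched as selector fragments under `Settings.Inputs[].CaptionSelectors`. The helper is hypothetical and the page numbers are examples only; the `TeletextSourceSettings` and `PageNumber` field names follow the public MediaConvert API schema.

```python
from typing import Optional

def teletext_selector(page_number: Optional[str] = None) -> dict:
    source_settings = {"SourceType": "TELETEXT"}
    if page_number is not None:
        source_settings["TeletextSourceSettings"] = {"PageNumber": page_number}
    return {"SourceSettings": source_settings}

# Passthrough: one selector for the whole set of captions, no page number.
passthrough_selectors = {"Captions selector 1": teletext_selector()}

# Page remapping or converting to another format: one selector per input page.
per_page_selectors = {
    "Captions selector 1": teletext_selector("888"),
    "Captions selector 2": teletext_selector("889"),
}
```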

# IMSC, SCC, SMPTE-TT, SRT, STL, TTML (sidecar) input captions
<a name="sidecar-input"></a>

IMSC, SCC, SMPTE-TT, SRT, STL, and TTML are sidecar captions formats. With these formats, you provide input captions as a separate file. Depending on your output captions settings, AWS Elemental MediaConvert passes them through to the output in the same format or converts them into another sidecar format.

**All sidecar captions**  
In all cases, create one captions selector for each input captions file.

In **Source file**, enter the URI to the captions input file that is stored in Amazon S3 or on an HTTP(S) server. For Amazon S3 inputs, you can specify the URI directly or choose **Browse** to select from your Amazon S3 buckets. For HTTP(S) inputs, provide the URL to your input captions file. For more information, see [HTTP input requirements](http-input-requirements.md). 

**IMSC captions**  
MediaConvert supports IMSC as an input captions format either as a sidecar file or as part of an IMF source. If your input IMSC captions are part of an IMF package, see [IMSC input captions (as part of an IMF source)](IMSC-in-MXF.md). For restrictions on IMSC support, see [IMSC requirements](imsc-captions-support.md).

**SMPTE-TT captions**  
You can use SMPTE-TT input captions that are text-only, that include captions images in the captions file with base64 encoding (`smpte:image encoding="Base64"`), or that use external references to captions images (`smpte:backgroundImage`).

When your captions use external references to images, those images must be located in the same Amazon S3 bucket and folder as your captions file. For example, say this is the Amazon S3 path to your SMPTE-TT file: `s3://amzn-s3-demo-bucket/mediaconvert-input/captions/my-captions-spanish.ttml`. Then you must store the image files that the captions file references here: `s3://amzn-s3-demo-bucket/mediaconvert-input/captions/`.

**SRT captions**  
MediaConvert supports SRT input captions with UTF-8 character encoding.

**Synchronizing sidecar captions and video**  
To make sure that your captions are properly synchronized with your video, check that the value for **Timecode source** in the **Video selector** section matches the timecodes in your captions file. For example, if your video has embedded timecodes starting at 01:00:00:00, but the timecodes in your captions file start at zero, change the default value for the video selector **Timecode source** from **Embedded** to **Start at 0**. If other aspects of your job prevent that, use the **Time delta** setting to adjust your captions, as described in [Use cases for time delta](time-delta-use-cases.md).

**Note**  
MediaConvert handles the alignment of captions with video differently depending on whether the caption format is timecode-based or timestamp-based. For more information, see [Input timecode source and captions alignment](about-input-timecode-source-and-captions-alignment.md).

Enter a positive or negative number in **Time delta** to modify the time values in the captions file. By default, time delta is measured in seconds. For example, enter **15** to add 15 seconds to all the time values in the captions file. Or, enter **-5** to subtract 5 seconds from the time values in the captions file. To specify in milliseconds instead, set **Time delta units** to **Milliseconds**.

If the value that you enter for **Time delta** would place captions before the start or after the end of your video, those captions won't be present in your output.

**Note**  
When converting from SCC to SRT, MediaConvert first rounds the value you set for **Time delta** to the nearest input frame. MediaConvert uses this rounded value when calculating output SRT timings.
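The rounding described in the note can be sketched as arithmetic. This is an approximation: MediaConvert's exact rounding rule isn't documented beyond "nearest input frame," and this sketch uses Python's `round`. The frame rate is your input's.

```python
# Snap a Time delta (in seconds or milliseconds, per Time delta units) to the
# nearest whole input frame, as MediaConvert does before computing SRT timings.

def rounded_delta_seconds(delta: float, units: str, fps: float) -> float:
    seconds = delta / 1000.0 if units == "MILLISECONDS" else float(delta)
    frames = round(seconds * fps)   # nearest input frame
    return frames / fps
```

For example, at 25 fps a delta of 1010 milliseconds snaps to 25 frames, that is, exactly 1 second.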

**Topics**
+ [Input timecode source and captions alignment](about-input-timecode-source-and-captions-alignment.md)
+ [Use cases for time delta](time-delta-use-cases.md)
+ [Converting dual SCC input files to embedded captions](converting-dual-scc-input-files-to-embedded-captions.md)
+ [TTML style formatting](ttml-style-formatting.md)

# Input timecode source and captions alignment
<a name="about-input-timecode-source-and-captions-alignment"></a>

When you adjust your input timeline by setting the input **Timecode source** to **Start at 0** or **Specified start**, MediaConvert behaves as though your input has embedded timecodes that start when you specify. But MediaConvert doesn't change the timecodes or timestamps in your sidecar captions files. Therefore, the way that you align your captions depends on your captions format.

**Timecode-based sidecar formats (SCC, STL)**  
Some captions formats, including SCC and STL, define where captions are placed in the video by timecode. With these formats, MediaConvert places each caption on the frames specified in the captions file, according to each frame's timecode in the input timeline. To adjust your captions to start at a different time than that, use the **Time delta** setting. For more information, see [Use cases for time delta](time-delta-use-cases.md).

MediaConvert establishes the input timeline based on the value you choose for the input **Timecode source** setting.

For example, if your SCC file specifies that the first caption should appear at 00:05:23:00 and you set **Timecode source** to **Specified start** and **Start timecode** to 00:04:00:00, the first caption will appear in your output one minute and twenty-three seconds into the video. If you set **Timecode source** to **Specified start** and **Start timecode** to 01:00:00:00, you won't see captions when you expect, because 00:05:23:00 occurs before the start of your video, according to the input timeline.

**Timestamp-based sidecar formats (SRT, SMI, TTML)**  
Some captions formats, including SRT, SMI, and TTML, define where captions are placed in the video by timestamp. With these formats, MediaConvert measures the placement of the captions as the distance, in time, from the start of the video. This is true regardless of whether the captions file specifies placement with a timecode or a timestamp.

Therefore, your captions appear at the time specified in the captions file without regard to the video timecodes. For example, if your SRT file specifies that the first caption should appear at 00:05:23:00 or at 00:05:23,000 and you set **Timecode source** to **Specified start** and **Start timecode** to 00:04:00:00, the first caption will still appear in your output five minutes and twenty-three seconds into the video.

To adjust your captions to start at a different time than that, use the **Time delta** setting. For more information, see [Use cases for time delta](time-delta-use-cases.md).

**Formats that embed captions in the video stream (CEA/EIA-608, CEA/EIA-708)**  
Some captions formats embed the captions directly in the video frame or the video frame metadata. With these, MediaConvert keeps the captions with the frames that they are embedded in, regardless of the timecode settings.

# Use cases for time delta
<a name="time-delta-use-cases"></a>

How you use **Time delta (TimeDelta)** depends on the problem you're trying to solve and the captions format that you're working with.

 By default, you specify the time delta in seconds. If you want to specify it in milliseconds instead, set **Time delta units (TimeDeltaUnits)** to **Milliseconds (MILLISECONDS)**.

## Adjusting for different timecodes between video and captions files
<a name="adjusting-for-different-timecodes-between-video-and-captions-files"></a>

With timecode-based captions formats, such as SCC and STL, the timecodes in the captions might be relative to a starting timecode that is different from the starting timecode embedded in the video. You use **Time delta** to adjust for the difference.

**Example problem:** Your video file might have embedded timecodes that start at 00:05:00:00. The first instance of dialogue that requires captions might be one minute into the video, at timecode 00:06:00:00. Your captions file might be written on the assumption that your video timecodes start at 00:00:00:00, with the first caption starting at 00:01:00:00. If you don't use **Time delta**, MediaConvert would not include this first caption because it occurs before the start of the video. 

**Solution:** Add five minutes to the captions. Enter **300** for **Time delta**.

## Adjusting captions after synchronizing video and audio
<a name="adjusting-captions-after-sychronizing-video-and-audio"></a>

Your timecode-based (SCC or STL) captions might be aligned with the timecodes that are embedded in your video, but you might need to use the input **Timecode source** setting to align your audio. This creates a difference between the video and captions, which you need to adjust for. You don't need to make this adjustment with timestamp-based captions formats, such as SRT, SMI, and TTML.

For more information about aligning captions when you use input **Timecode source**, see [Input timecode source and captions alignment](about-input-timecode-source-and-captions-alignment.md).

**Example problem:** Your video file might have embedded timecodes that start at 00:05:00:00, and the first instance of dialogue that requires captions might be one minute into the video, at timecode 00:06:00:00. Your captions file is written to sync correctly with those embedded timecodes, with the first caption starting at 00:06:00:00. But you need to set the input **Timecode source** to **Start at 0** to sync your video with your audio file. If you don't use **Time delta**, MediaConvert would put the first caption in your output at six minutes into the video.

**Solution:** Subtract five minutes from the captions. Enter **-300** for **Time delta**.

## Correcting slight errors in captions sync
<a name="correcting-slight-errors-in-captions-sync"></a>

With any type of sidecar format, there might be a small error in your input captions file, so that the captions are consistently a little late or a little early.

**Example problem:** Your video has embedded timecodes that start at zero. The first instance of dialogue that requires captions is at 00:06:15:00, but the captions appear on the screen three seconds late, at 00:06:18:00.

**Solution:** Subtract three seconds from the captions. Enter **-3** for **Time delta**.
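The arithmetic behind the first two use cases can be sketched with a small helper. This helper is illustrative only, not part of any AWS tool: it computes the difference, in seconds, between where the input timeline starts and where the captions file assumes it starts.

```python
def tc_to_seconds(tc: str, fps: int = 30) -> float:
    """Convert an HH:MM:SS:FF timecode to seconds at the given frame rate."""
    hh, mm, ss, ff = (int(part) for part in tc.split(":"))
    return hh * 3600 + mm * 60 + ss + ff / fps

def time_delta(timeline_start: str, captions_assumed_start: str) -> int:
    """Time delta in seconds: input timeline start minus captions' assumed start."""
    return round(tc_to_seconds(timeline_start) - tc_to_seconds(captions_assumed_start))

# First use case: video timecodes start at 00:05:00:00, captions assume zero.
first_case = time_delta("00:05:00:00", "00:00:00:00")    # 300

# Second use case: Timecode source set to Start at 0, captions written for 00:05:00:00.
second_case = time_delta("00:00:00:00", "00:05:00:00")   # -300
```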

# Converting dual SCC input files to embedded captions
<a name="converting-dual-scc-input-files-to-embedded-captions"></a>

If you want to use two SCC files as your captions input and embed the captions as two output captions channels embedded in your output video stream, set up your captions according to this procedure.

**To convert dual SCC to embedded captions**

1. Set up two input captions selectors. Follow the procedure in [Creating input captions selectors](including-captions.md#create-input-caption-selectors). Specify values as follows:
   + In each captions selector, choose **SCC** for **Source**.
   + For **Source file**, choose one of your input SCC files in each selector.
   + If you want both 608 and 708 captions embedded in your outputs, choose **Upconvert** for **Force 608 to 708 upconvert** in both captions selectors.

1. Set up captions in your outputs. Follow the procedure in [Setting up captions in outputs](set-up-captions-in-outputs.md). Follow these specific choices:
   + Specify the captions in the same output as the video that you want the captions embedded in.
   + Choose **Add captions** twice, to create **Captions 1** and **Captions 2** tabs in the **Encoding settings** section.
   + For **Captions source**, in each of the captions tabs, choose one of the captions selectors that you created in the preceding step of this procedure.
   + For **CC channel number**, choose a channel number for each of the captions tabs so that the two channels don't share a field. For example, in **Captions 1**, choose **1** for **CC channel number**, and in **Captions 2**, choose **3** for **CC channel number**.

     Don't choose the combinations 1 and 2 or 3 and 4, because those pairs of channels share the same field.
   + If you chose **Upconvert** in the preceding step of this procedure, optionally specify a service number for **708 service number**. Within an output, each captions tab must specify a different service number.

     If you upconvert and don't specify a value for **708 service number**, the service uses the value that you specify for **CC channel number** as your 708 service number.
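The field-sharing rule above can be expressed as a quick check you could run on your own settings. This helper is illustrative, not part of MediaConvert: CC channels 1 and 2 share field one, and channels 3 and 4 share field two.

```python
def cc_field(channel: int) -> int:
    """Return the field that a CC channel occupies: 1 and 2 share field one,
    3 and 4 share field two."""
    return 1 if channel in (1, 2) else 2

def channels_compatible(a: int, b: int) -> bool:
    """Two output captions channels must land on different fields."""
    return cc_field(a) != cc_field(b)
```

For the procedure above, the pairing 1 and 3 is valid, while 1 and 2 (or 3 and 4) is not.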

# TTML style formatting
<a name="ttml-style-formatting"></a>

AWS Elemental MediaConvert reads the style formatting of your input captions when your job runs. If you notice issues with the style formatting of your output, we recommend checking the formatting of your input captions or setting **Style passthrough** to **Enabled**. The following topics provide guidance for using fonts, heritable and non-heritable attributes, and right to left languages in your TTML input captions.

**Specifying fonts** 

MediaConvert supports the following generic font families listed in the [TTML2 W3C recommendation](https://www.w3.org/TR/ttml2/#style-value-generic-family-name): 
+ default
+ monospace
+ sansSerif
+ serif
+ monospaceSansSerif
+ monospaceSerif
+ proportionalSansSerif
+ proportionalSerif

For the best results, specify a generic font family within your TTML input captions. If you specify an individual font instead, MediaConvert maps that font to one of the generic font families listed above.

**Heritable and non-heritable attributes** 

Style attributes are either heritable or non-heritable. The [TTML2 W3C recommendation](https://www.w3.org/TR/ttml2/#styling-attribute-vocabulary) indicates whether each style attribute is *inherited*.

Include non-heritable style attributes in every element that you want them to apply to.

For example, `tts:backgroundColor` is a non-heritable style attribute. The following results in *hello* with a red background color and *world* with no background color: 

`<span tts:backgroundColor="red">hello<br/>world</span>` 

You can fix the preceding formatting so that both *hello* and *world* have a red background color by using individual spans, each with its own style attribute, as in this example:

`<span><span tts:backgroundColor="red">hello</span> <br/> <span tts:backgroundColor="red">world</span></span>` 

**Right to left languages** 

MediaConvert supports both left to right and right to left text directions within TTML. 

When you don’t specify text direction, MediaConvert uses left to right. 

To specify right to left, include a `tts:direction="rtl"` attribute. If your text has a mix of bidirectional characters, also include a `tts:unicodeBidi="embed"` attribute as described in the [TTML2 W3C recommendation](https://www.w3.org/TR/ttml2/#style-attribute-direction). Note that `tts:unicodeBidi` is a non-heritable attribute.

# IMSC input captions (as part of an IMF source)
<a name="IMSC-in-MXF"></a>

AWS Elemental MediaConvert supports IMSC as an input captions format either as a sidecar file or as part of an IMF source. If your input IMSC captions are in a sidecar file, see [IMSC, SCC, SMPTE-TT, SRT, STL, TTML (sidecar) input captions](sidecar-input.md).

When your input IMSC captions are part of an IMF source, you don't specify the source file for IMSC captions. That information is in the CPL file that you specify for your job input. For restrictions on IMSC support, see [IMSC requirements](imsc-captions-support.md).

**Number of captions selectors for IMSC**  
Create one captions selector per track.

**Track number**  
Specify which captions you want by providing a track number. The track numbers correspond to the order that the tracks appear in the CPL file. For example, if your CPL file lists your French captions first, set **Track number** to **1** to specify the French captions.

**In your JSON job specification**  
If you use the API or an SDK, you can find these settings in the JSON file of your job. These settings are under `Inputs`, as in the following example:

```
"Inputs": [
  {
    ...
    "CaptionSelectors": {
      "Captions Selector 1": {
        "SourceSettings": {
          "SourceType": "IMSC",
          "TrackSourceSettings": {
            "TrackNumber": 1
          }
        }
      },
      "Captions Selector 2": {
        "SourceSettings": {
          "SourceType": "IMSC",
          "TrackSourceSettings": {
            "TrackNumber": 4
          }
        }
      },
      ...
```

# WebVTT input captions (as part of an HLS source)
<a name="WebVTT-in-HLS"></a>

AWS Elemental MediaConvert supports WebVTT as an input captions format either as a sidecar file or as part of an HLS source. If your input WebVTT captions are in a sidecar file, see [IMSC, SCC, SMPTE-TT, SRT, STL, TTML (sidecar) input captions](sidecar-input.md).

When your input WebVTT captions are part of an HLS source, you don't need to specify the source WebVTT manifest file for the captions. That information is in the main HLS input file that you specify in your job input. You do need to enable **Use HLS Rendition Group** and use the following settings.

**Number of captions selectors for WebVTT**  
Create one captions selector per WebVTT source.

**Rendition Group Id**  
Specify which captions group you want by providing a group ID. The group ID corresponds to the `GROUP-ID` attribute of the `EXT-X-MEDIA` tag in your HLS manifest. For example, if your HLS manifest lists your French captions in a group named "subs", set **Rendition Group ID** to **subs**.

**Rendition Name**  
Specify which captions rendition you want by providing a rendition name. The rendition name corresponds to the `NAME` attribute of the `EXT-X-MEDIA` tag in your HLS manifest. For example, if your HLS manifest lists your French captions rendition with the name "French", set **Rendition Name** to **French**.

**Rendition Language Code**  
Specify which captions rendition you want by providing an ISO 639-3 language code. The language code corresponds to the `LANGUAGE` attribute of the `EXT-X-MEDIA` tag in your HLS manifest. For example, if your HLS manifest lists your French captions with the language code "FRA", set **Rendition Language Code** to **FRA**.

**In your JSON job specification**  
If you use the API or an SDK, you can find these settings in the JSON file of your job. These settings are under `Inputs`, as in the following example:

```
"Inputs": [
  {
    ...
    "CaptionSelectors": {
      "Caption Selector 1": {
        "SourceSettings": {
          "SourceType": "WebVTT",
          "WebvttHlsSourceSettings": {
            "RenditionGroupId": "subs",
            "RenditionName": "French",
            "RenditionLanguageCode": "FRA"
          }
        }
      }
    }
    ...
```

# Creating outputs
<a name="output-settings"></a>

A single MediaConvert job can create outputs as a standalone file (for example, an .mp4 file), a set of files for adaptive bitrate (ABR) streaming (for example, an Apple HLS package), or combinations of both. When you create output groups and the outputs within them, you specify the number and types of files that your job generates.

When your MediaConvert job is complete, you can use Amazon CloudFront, or another content distribution network (CDN), to deliver your streaming package. The CDN gets your video to the people who want to view it. For more information, see [Delivering video on demand (VOD) with CloudFront](https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/on-demand-video.html).

The topics in this section explain the relationship between MediaConvert output groups, MediaConvert outputs, and the actual output files that MediaConvert delivers to you. 

**Topics**
+ [Setting up captions in outputs](set-up-captions-in-outputs.md)
+ [Using output groups to specify a streaming package type or standalone file](outputs-file-ABR.md)
+ [Choosing your ABR streaming output groups](choosing-your-streaming-output-groups.md)
+ [Recommended encoding settings for video quality](video-quality.md)
+ [Using variables in your job settings](using-variables-in-your-job-settings.md)

# Setting up captions in outputs
<a name="set-up-captions-in-outputs"></a>

The location of the captions in a job depends on your output captions format: Your captions might be in the same output as your video, a separate output in the same output group as your video, or in an entirely separate output group. How you set up multiple captions tracks also depends on the output captions format. 

For a full list of supported input and output captions, see [Captions reference tables](captions-support-tables.md).

For information about how to set up captions in your input, see [Setting up input captions](including-captions.md).

The following procedure shows how to set up captions for different outputs.

**To set up captions for different outputs**

1. Open the MediaConvert console at [https://console.aws.amazon.com/mediaconvert](https://console.aws.amazon.com/mediaconvert).

1. Choose **Create job**.

1. Set up your input, output groups, and outputs for video and audio, as described in [Tutorial: Configuring job settings](setting-up-a-job.md) and [Creating outputs](output-settings.md).

1. Create input captions selectors as described in [Creating input captions selectors](including-captions.md#create-input-caption-selectors).

1. Determine where in your job to specify the captions. This choice depends on the output captions format. To find out where, see the relevant topic listed after this procedure.

1. In the left pane of the **Create job** page, choose the appropriate output from the list of outputs.

1. Under **Encoding settings**, choose **Add captions**. This displays a captions settings area under **Encoding settings**.

1. If your output captions format requires a separate group of captions settings for each track in the output, choose **Add captions** again until you have one captions group for each track. To determine whether you need one captions settings group for all tracks or one for each track, see the relevant topic below.

1. Under **Encoding settings**, choose **Captions 1** from the list.

1. Under **Captions source**, choose a captions selector. This selects the track or tracks that you associated with the selector when you set up your input, so that AWS Elemental MediaConvert will include those captions in this output.

1. Under **Destination type**, choose an output captions format. Check [Supported input captions, within video containers](captions-support-tables-by-container-type.md) to ensure that you are choosing a supported format.

1. Provide values for any additional fields as described in the relevant topic below.
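If you work in a JSON job specification instead of the console, each group of captions settings that you add corresponds to one entry in an output's `CaptionDescriptions` array. The following is a minimal sketch; the selector name, destination type, and language values are placeholders that you would replace with your own:

```
"CaptionDescriptions": [
  {
    "CaptionSelectorName": "Captions Selector 1",
    "DestinationSettings": {
      "DestinationType": "TTML"
    },
    "LanguageCode": "ENG",
    "LanguageDescription": "English"
  }
]
```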



**Topics**
+ [CEA/EIA-608 and CEA/EIA-708 (embedded) output captions](embedded-output-captions.md)
+ [DVB-Sub output captions](dvb-sub-output-captions.md)
+ [IMSC, TTML, and WebVTT (sidecar) output captions](ttml-and-webvtt-output-captions.md)
+ [SCC, SRT, and SMI (sidecar) output captions](scc-srt-output-captions.md)
+ [Teletext output captions](teletext-output-captions.md)
+ [Burn-in output captions](burn-in-output-captions.md)
+ [Settings for accessibility captions](accessibility-captions.md)

# CEA/EIA-608 and CEA/EIA-708 (embedded) output captions
<a name="embedded-output-captions"></a>

 This section covers how to configure embedded output captions in AWS Elemental MediaConvert. The main topics include:
+ Where to specify the captions.
+ How to specify multiple captions tracks.
+ Embedded and ancillary captions in MXF outputs.

## Where to specify the captions
<a name="where-embedded-output-captions"></a>

Put your captions in the same output group and the same output as your video.

## How to specify multiple captions tracks
<a name="multilang-embedded-output-captions"></a>
+ If your input captions format is embedded (that is, you are passing through embedded-to-embedded), you need to create only one group of captions settings. The captions selector that you choose under **Captions source** includes all tracks from the input.
+ If your input captions are two SCC files, you can create output captions as two output captions channels that are embedded in your output video stream. For more information, see [Converting dual SCC input files to embedded captions](converting-dual-scc-input-files-to-embedded-captions.md).
+ If your input captions are not embedded or SCC, you can include only one captions track per output. In each output, include one group of captions settings. Under **Captions source**, choose the selector that is set up for the track that you want to include.
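For embedded-to-embedded passthrough, a single group of captions settings is enough because the one captions selector carries all of the input tracks. In a JSON job specification, that corresponds to a single `CaptionDescriptions` entry in the same output as your video, roughly as in this sketch (the selector name is an assumption):

```
"CaptionDescriptions": [
  {
    "CaptionSelectorName": "Captions Selector 1",
    "DestinationSettings": {
      "DestinationType": "EMBEDDED"
    }
  }
]
```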

## Embedded and ancillary captions in MXF outputs
<a name="embedded-and-ancillary-captions-in-mxf-outputs"></a>

Whether your MXF output can contain ancillary captions depends on the MXF profile:
+ MXF XDCAM HD: This MXF profile specifies ancillary data in the SMPTE 436 track. With these outputs, MediaConvert copies your embedded captions to the SMPTE 436 ancillary track in addition to including them in the video stream.
+ MXF D-10: This MXF profile specification doesn't allow for ancillary data. Therefore, your MXF D-10 outputs only have captions embedded in the video stream.

MediaConvert determines an output's MXF profile based on the values for the following encoding settings:
+ Resolution
+ Frame rate
+ Video codec profile
+ Interlace mode

For information about which values for these settings are valid for which MXF profile, see the relevant specifications. For XDCAM HD, see [RDD 9:2009 - SMPTE Standard Doc - MXF Interoperability Specification of Sony MPEG Long GOP Products](https://ieeexplore.ieee.org/document/7290306) in the IEEE Xplore Digital Library. For MXF D-10, see [ST 356:2001 - SMPTE Standard - For Television — Type D-10 Stream Specifications — MPEG-2 4:2:2P @ ML for 525/60 and 625/50](https://ieeexplore.ieee.org/document/7290684).

# DVB-Sub output captions
<a name="dvb-sub-output-captions"></a>

 This section covers how to configure DVB-Sub output captions in AWS Elemental MediaConvert. The main topics include:
+ Where to specify the captions.
+ How to specify multiple captions tracks.
+ How to specify the font script.

## Where to specify the captions
<a name="where-dvb-sub-output-captions"></a>

Put your captions in the same output group and the same output as your video.

## How to specify multiple captions tracks
<a name="multilang-dvb-sub-output-captions"></a>
+ If your input captions are the same format as your output captions (passthrough), you need to create only one group of captions settings. The captions selector that you choose under **Captions source** includes all tracks from the input.
+ If your input captions are in a different format, create one group of captions settings for each track. Put each group of captions settings in the same output. They will appear in the list of settings groups as Captions 1, Captions 2, and so forth. In each group of settings, choose the captions selector under **Captions source** that is set up for the track that you want to include.

## How to specify the font script
<a name="how-to-specify-lang-script-dvb-sub"></a>

AWS Elemental MediaConvert automatically selects the appropriate script for your captions, based on the language that you specify in the output captions settings. If the language that you choose has more than one possible script, specify the script that you want.

**To ensure that the service uses the correct font script**

1. In the **Captions** section under **Encoding settings**, for **Language**, choose the language of the captions text.

1. If the language that you specify has more than one possible script, use **Font script** to specify the script.

   For example, if you choose **Chinese** (ZH) for **Language**, use **Font script** to choose either **Simplified Chinese** or **Traditional Chinese**. In this case, if you don’t specify a value for **Font script**, the service defaults to simplified Chinese. 
**Tip**  
In most cases, for **Font script** you can keep the default value of **Automatic**. When you do, the service chooses the script based on the language of the captions text.
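In a JSON job specification, the **Language** and **Font script** choices correspond to settings on the captions description, as in the following sketch. The selector name is an assumption, and the language code value is illustrative; `HANT` selects traditional Chinese and `HANS` selects simplified Chinese:

```
"CaptionDescriptions": [
  {
    "CaptionSelectorName": "Captions Selector 1",
    "DestinationSettings": {
      "DestinationType": "DVB_SUB",
      "DvbSubDestinationSettings": {
        "FontScript": "HANT"
      }
    },
    "LanguageCode": "ZHO"
  }
]
```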

# IMSC, TTML, and WebVTT (sidecar) output captions
<a name="ttml-and-webvtt-output-captions"></a>

 This section covers how to configure IMSC, TTML, and WebVTT (sidecar) output captions in AWS Elemental MediaConvert. The main topics include:
+ Where to specify the captions.
+ How to specify multiple captions tracks.
+ Sidecar captions container options.

If your output captions are IMSC, TTML or WebVTT format, set them up in your outputs according to the following information. For restrictions on IMSC support, see [IMSC requirements](imsc-captions-support.md).

## Where to specify the captions
<a name="where-ttml-and-webvtt-output-captions"></a>

Put your captions in the same output group, but a different output from your video.

After you add captions to an output, delete the **Video** and **Audio 1** groups of settings that the service automatically created with the output.

**To delete the Video and Audio 1 groups of settings**

1. On the **Create job** page, in the **Job** pane on the left, under **Output groups**, choose the output that contains the groups of settings that you want to delete.

1. The **Video** group of settings is automatically displayed in the **Stream settings** section. Choose the **Remove video selector** button.

1. The **Audio 1** group of settings is automatically displayed in the **Stream settings** section. Choose the **Remove** button.

## How to specify multiple captions tracks
<a name="multilang-ttml-and-webvtt-output-captions"></a>

Put each captions track in its own output.

**Note**  
The captions track that you specify first in your job is signaled as the default track in the HLS manifest.

## Sidecar captions container options
<a name="sidecar-captions-container-options"></a>

Depending on your output group, you can choose the captions container for IMSC and TTML captions outputs.

For **DASH ISO** output groups, you can choose from these:
+ Fragmented MP4 (`.fmp4`)
+ Raw (`.xml` for IMSC, `.ttml` for TTML)

For all other output groups, IMSC and TTML files are raw.

**To specify the captions container for IMSC and TTML captions in DASH ISO output groups**

1. Set up the outputs in your **DASH ISO** output group as described in [Creating outputs in ABR streaming output groups](setting-up-a-job.md#create-outputs-in-abr-streaming-output-groups). Put captions in a separate output.

1. On the **Create job** page, in the **Job** pane on the left, choose the captions output.

1. In the **Output settings** section on the right, choose **Container settings**, and then enable **DASH container settings**.

1. For **Captions container**, keep the default **Raw** or choose **Fragmented MPEG-4**.
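In a JSON job specification, this choice appears in the captions output's container settings, roughly as in the following sketch. `FRAGMENTED_MP4` corresponds to the console choice **Fragmented MPEG-4**; keep `RAW` for the default:

```
"Outputs": [
  {
    "ContainerSettings": {
      "Container": "MPD",
      "MpdSettings": {
        "CaptionContainerType": "FRAGMENTED_MP4"
      }
    },
    "CaptionDescriptions": [
      ...
    ]
  }
]
```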

# SCC, SRT, and SMI (sidecar) output captions
<a name="scc-srt-output-captions"></a>

 This section covers how to configure SCC, SRT, and SMI (sidecar) output captions in AWS Elemental MediaConvert. The main topics include:
+ Where to specify the captions.
+ How to specify multiple captions tracks.

## Where to specify the captions
<a name="where-scc-srt-output-captions"></a>

Put your captions in the same output group, but a different output from your video.

After you add captions to an output, delete the **Video** and **Audio 1** groups of settings that the service automatically created with the output.

**To delete the Video and Audio 1 groups of settings**

1. On the **Create job** page, in the **Job** pane on the left, under **Output groups**, choose the output that contains the groups of settings that you want to delete.

1. The **Video** group of settings is automatically displayed in the **Stream settings** section. Choose the **Remove video selector** button.

1. The **Audio 1** group of settings is automatically displayed in the **Stream settings** section. Choose the **Remove** button.

## How to specify multiple captions tracks
<a name="multilang-scc-srt-output-captions"></a>

Put each captions track in its own output; that is, for SCC, SRT, or SMI captions, create one output per captions selector. In each captions output, under **Captions source**, choose the captions selector that you set up for the track that you want to include. The selectors appear in the list as **Captions Selector 1**, **Captions Selector 2**, and so on.
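In a JSON job specification, a sidecar captions output is an output with no video or audio settings, only a captions description. The following is a minimal sketch for SCC; the selector name is an assumption:

```
"Outputs": [
  {
    "ContainerSettings": {
      "Container": "RAW"
    },
    "CaptionDescriptions": [
      {
        "CaptionSelectorName": "Captions Selector 1",
        "DestinationSettings": {
          "DestinationType": "SCC"
        }
      }
    ]
  }
]
```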

# Teletext output captions
<a name="teletext-output-captions"></a>

 This section covers how to configure teletext output captions in AWS Elemental MediaConvert. The main topics include:
+ Teletext to Teletext passthrough.
+ Teletext to Teletext, page remapping.
+ Teletext from other captions formats.

How you set up your output Teletext captions depends on whether you want to move the captions to different Teletext pages or to just pass through your captions exactly from the input to the output.

## Teletext to Teletext passthrough
<a name="teletext-to-teletext-passthrough"></a>

When your input captions format is Teletext and you want your output captions to be on the same pages, with the same styling, as the input, then you can pass through the input captions to your output. To do so, set up your captions this way:
+ Make sure that your input captions are set up with one captions selector. For more information, see [Teletext input captions](teletext.md).
+ In the same output group and same output as your video, create one captions tab. This one captions tab represents all of your output captions, regardless of the number of output Teletext pages you have.
+ In your output captions tab, choose your input captions selector for **Captions source**.
+ Don't specify values for any other settings on the output captions tab.

When you work directly in your JSON job specification, one captions tab corresponds to one child of `CaptionDescriptions`.

## Teletext to Teletext, page remapping
<a name="teletext-to-teletext-page-remapping"></a>

When your input captions format is Teletext and, in your output, you want to change the Teletext pages that your captions are on, you specify the pages in the input and output. To do so, set up your captions this way:
+ Make sure that your input captions are set up with one captions selector for each Teletext page and that you specify the page number in the settings for each input captions selector. For more information, see [Teletext input captions](teletext.md).
+ In the same output group and same output as your video, create one captions tab for each output Teletext page.
+ In each output captions tab, choose one of your input captions selectors for **Captions source**.
+ In each output captions tab, for **Page number**, specify the Teletext page number that you want for those captions in your output. Optionally, provide values for **Language**, **Description**, and **Page types**.
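In a JSON job specification, the page remapping above corresponds to one `CaptionDescriptions` entry per output Teletext page, each with its own page number. The following is a sketch; the selector names and page numbers are illustrative:

```
"CaptionDescriptions": [
  {
    "CaptionSelectorName": "Captions Selector 1",
    "DestinationSettings": {
      "DestinationType": "TELETEXT",
      "TeletextDestinationSettings": {
        "PageNumber": "888"
      }
    }
  },
  {
    "CaptionSelectorName": "Captions Selector 2",
    "DestinationSettings": {
      "DestinationType": "TELETEXT",
      "TeletextDestinationSettings": {
        "PageNumber": "889"
      }
    }
  }
]
```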

## Teletext from other captions formats
<a name="teletext-from-other-captions-formats"></a>

When your input captions are in a format other than Teletext, you must specify the Teletext pages for your output captions. MediaConvert supports these captions workflows:
+ A single input captions track to a single output Teletext page.
+ A single input captions track to multiple output Teletext pages. Each output page duplicates the contents of the others.
+ Multiple input captions tracks to multiple output Teletext pages. You use captions selectors to specify which captions to include on each output Teletext page.

Set up your captions like this:
+ Make sure that your input captions are set up with one captions selector for each captions track that you intend to map to a Teletext page. For more information, see [Creating input captions selectors](including-captions.md#create-input-caption-selectors).
+ In the same output group and same output as your video, create one captions tab for each output Teletext page.
+ In each output captions tab, choose one of your input captions selectors for **Captions source**.
+ In each output captions tab, for **Page number**, specify the Teletext page number that you want for those captions in your output. Optionally, provide values for **Language** and **Description**.

# Burn-in output captions
<a name="burn-in-output-captions"></a>

 This section covers how to configure burn-in output captions in AWS Elemental MediaConvert. The main topics include:
+ Where to specify the captions.
+ How to specify multiple captions tracks.
+ How to use style passthrough.
+ How to specify the font script.
+ Non-English fonts and unsupported characters.

*Burn-in* is a way to deliver captions, rather than a captions format. Burn-in writes the captions directly on your video frames, replacing pixels of video content with the captions. If you want burn-in captions in an output, set up the captions according to the following information.

## Where to specify the captions
<a name="where-burn-in-output-captions"></a>

Put your captions in the same output group and the same output as your video.

## How to specify multiple captions tracks
<a name="multilang-burn-in-output-captions"></a>

You can burn in only one track of captions in each output.

## How to use style passthrough
<a name="how-to-use-style-passthrough"></a>

You can choose how to stylize the burn-in captions text that appears in your output video. The options include style passthrough, default settings, and manual overrides.

When you set **Style passthrough** to **Enabled**, MediaConvert uses the available style and position information from your input captions. Note that MediaConvert uses default settings for any missing style information.

MediaConvert supports style passthrough for the following input caption formats: 
+ Ancillary
+ Embedded
+ SCTE-20
+ SCC
+ TTML
+ STL (EBU STL)
+ SMPTE-TT (text based)
+ Teletext
+ IMSC
+ WebVTT

When you set **Style passthrough** to **Disabled**, MediaConvert ignores style information from your input and uses default settings: white text with black outlining, bottom-center positioning, and automatic sizing.

Whether or not you enable style passthrough, you can also manually override any of the individual style options.

**Note**  
TTML and TTML-like (IMSC, SMPTE-TT) inputs have special style formatting requirements. For more information, see [TTML style formatting](ttml-style-formatting.md).
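In a JSON job specification, burn-in captions with style passthrough might look like the following sketch. The selector name is an assumption:

```
"CaptionDescriptions": [
  {
    "CaptionSelectorName": "Captions Selector 1",
    "DestinationSettings": {
      "DestinationType": "BURN_IN",
      "BurninDestinationSettings": {
        "StylePassthrough": "ENABLED"
      }
    }
  }
]
```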

## How to specify the font script
<a name="how-to-specify-the-language-script-burnin"></a>

AWS Elemental MediaConvert automatically selects the appropriate script for your captions, based on the language that you specify in the output captions settings. If the language that you choose has more than one possible script, specify the script that you want.

**To ensure that the service uses the correct font script**

1. In the **Captions** section under **Encoding settings**, for **Language**, choose the language of the captions text.

1. If the language that you specify has more than one possible script, use **Font script** to specify the script.

   For example, if you choose **Chinese** (ZH) for **Language**, use **Font script** to choose either **Simplified Chinese** or **Traditional Chinese**. In this case, if you don’t specify a value for **Font script**, the service defaults to simplified Chinese. 
**Tip**  
In most cases, for **Font script** you can keep the default value of **Automatic**. When you do, the service chooses the script based on the language of the captions text.

## Non-English fonts and unsupported characters
<a name="non-english-unsupported"></a>

When your input captions use a non-English font script, your output burn-in captions might contain unsupported characters, which appear as `□`. To resolve this, set **Style passthrough** to **Enabled**.

# Settings for accessibility captions
<a name="accessibility-captions"></a>

When you create an HLS or CMAF HLS output and include an IMSC or WebVTT captions track, you can add accessibility attributes for captions to your output manifest. MediaConvert adds these attributes according to sections 4.5 and 4.6 of the [HLS authoring specification for Apple devices](https://developer.apple.com/documentation/http_live_streaming/hls_authoring_specification_for_apple_devices).

When you set **Accessibility subtitles** (`accessibility`) to **Enabled** (`ENABLED`), MediaConvert adds the following attributes to the captions track in the manifest under `EXT-X-MEDIA`: `CHARACTERISTICS="public.accessibility.describes-spoken-dialog,public.accessibility.describes-music-and-sound"` and `AUTOSELECT="YES"`.

Keep the default value, **Disabled** (`DISABLED`), if the captions track isn't intended to provide such accessibility. When you do, MediaConvert doesn't add the preceding attributes to the manifest.
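In a JSON job specification, the accessibility flag is part of the captions destination settings, as in the following WebVTT sketch (the selector name is an assumption; IMSC captions use `ImscDestinationSettings` similarly):

```
"CaptionDescriptions": [
  {
    "CaptionSelectorName": "Captions Selector 1",
    "DestinationSettings": {
      "DestinationType": "WEBVTT",
      "WebvttDestinationSettings": {
        "Accessibility": "ENABLED"
      }
    }
  }
]
```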

# Using output groups to specify a streaming package type or standalone file
<a name="outputs-file-ABR"></a>

How an AWS Elemental MediaConvert output functions depends on the type of output group that it's part of.

File  
 In a **File** output group, each output that you set up results in a standalone output file.  
For example, you might set up one output that contains all the video, audio, and captions together. You might also set up a separate output for sidecar captions, such as TTML.

Streaming output packages  
In the following output groups, the outputs that you set up are separate parts of a single adaptive bitrate (ABR) streaming package: CMAF, Apple HLS, DASH ISO, and Microsoft Smooth Streaming.

In an ABR output group, each output is usually one element of the media. That is, each output is one slice in the adaptive bitrate (ABR) stack. For example, you might have an output for each of three resolutions of video, an output for each of two audio language tracks, and an output for each of two captions languages.

The following illustration shows the relationship between outputs in an ABR output group and the files that MediaConvert creates. Each orange box corresponds to an output within the output group. In this example, there are three resolutions of video, audio in two languages, and captions in two languages. The package contains segmented audio, video, and captions files, plus manifest files that tell the player which files to download and when to play them.

![\[Each rendition in an ABR stack with its own output in the output group.\]](http://docs.aws.amazon.com/mediaconvert/latest/ug/images/ABRsegSeparately.png)


A single job can generate zero or more standalone files and zero or more streaming packages. To create more than one standalone file, add a single **File** output group to your job, and then add multiple outputs to that output group. To create more than one streaming package, add multiple **CMAF**, **Apple HLS**, **DASH ISO**, or **Microsoft Smooth Streaming** output groups to your job.

The following illustration shows a MediaConvert job that generates two standalone .mp4 files, two Apple HLS packages, and a CMAF package. A single file output group with two outputs results in two standalone files. A single Apple HLS output group with seven outputs results in a single viewable package with seven ABR slices. 

![\[MediaConvert job generating two standalone .mp4 files, two Apple HLS packages, and a CMAF package.\]](http://docs.aws.amazon.com/mediaconvert/latest/ug/images/jobSetupToOutput.png)


For information about setting up output groups and outputs within your job, see [Tutorial: Configuring job settings](setting-up-a-job.md).

# Choosing your ABR streaming output groups
<a name="choosing-your-streaming-output-groups"></a>

To create media assets for people to stream to their devices, choose one or more of the adaptive bitrate (ABR) output groups: Apple HLS, DASH ISO, Microsoft Smooth Streaming, or CMAF. The type of output group determines which media players can play the files that MediaConvert creates from that output group.

**Note**  
When you set up CMAF, DASH ISO, or Microsoft Smooth Streaming output groups, make sure to set your fragment length correctly. For information about setting fragment length, see [Setting the fragment length for streaming outputs](setting-the-fragment-length.md).

The following table summarizes the relationships between output groups and media players.


| Media players | Use this output group | 
| --- | --- | 
| Apple devices, earlier than approximately 2013 | Apple HLS | 
| Apple devices, newer | CMAF | 
| Android devices, most smart TVs | CMAF or DASH ISO | 
| Microsoft devices | Microsoft Smooth Streaming | 

**Note**  
MediaConvert bills for each minute of transcoded output time, not per job. Therefore, adding output groups to a job increases its cost.   
For example, a job that creates both an Apple HLS package and a DASH ISO package costs twice as much as a job that creates only one of those packages, assuming that the transcoding settings are the same.

**To determine which output groups you need**

1. Decide which devices you want end viewers to be able to play the transcoded media asset on. If you want your asset to play on every possible device, include these output groups:
   + Apple HLS
   + DASH ISO or CMAF
   + Microsoft Smooth Streaming

1. Consider whether to use advanced encoding features. To deliver either of the following to Apple devices, you must also include a CMAF output group:
   + High-dynamic-range (HDR) video
   + H.265 (HEVC) encoded video

   If you include a CMAF output, you don't need to create a DASH ISO output because all the common DASH-compatible players are also CMAF-compatible. 
**Note**  
There are a few uncommon DASH players that explicitly require the .mp4 file extension for video segments. MediaConvert outputs CMAF video segments with the .cmfv extension. To create output that is compatible with these players, include a DASH ISO output group in your job.

1. Consider cost trade-offs.

   If you don't need to support players produced earlier than approximately 2013, and if you don't need to support the rare DASH players that require .mp4 video segments, you can include a single CMAF output group instead of both DASH ISO and Apple HLS. Creating a single CMAF package instead of separate DASH ISO and Apple HLS packages can also offer cost savings in your video storage and distribution. This is because you must store and distribute only one set of video and audio files.

# Setting the fragment length for streaming outputs
<a name="setting-the-fragment-length"></a>

For all ABR streaming output groups other than HLS (CMAF, DASH, and Microsoft Smooth Streaming), the value that you specify for **Fragment length** (`FragmentLength`) must work with the other output settings that you specify. If you set **Fragment length** incorrectly, when viewers watch the output video, their player might crash. This can happen because the player expects additional segments at the end of the video and requests segments that don't exist. 

**Fragment length** is constrained by your values for **Closed GOP cadence** (`GopClosedCadence`), **GOP size** (`GopSize`), and **Frame rate** (`FramerateNumerator`, `FramerateDenominator`). For information about finding these settings on the console and in your JSON job specification, see [Finding the settings related to fragment length](#finding-the-settings-related-to-fragment-length).

**Note**  
When you set your output **Frame rate** to **Follow source**, make sure that the frame rate of your input video file works with the value that you specify for the output **Fragment length**. The frame rate of your input video file functions as your output frame rate. 

**Topics**
+ [Rule for fragment length](#rule-for-fragment-length)
+ [Fragment length examples](#fragment-length-examples)
+ [Finding the settings related to fragment length](#finding-the-settings-related-to-fragment-length)

## Rule for fragment length
<a name="rule-for-fragment-length"></a>

Fragment length must be a whole number and must be a multiple of this value: **GOP size** x **Closed GOP cadence** ÷ **Frame rate**

## Fragment length examples
<a name="fragment-length-examples"></a>

**Example: Correct settings**  
Closed GOP cadence = 1

Frame rate = 30

GOP size = 60 frames

Fragment length = 2

**Example: Incorrect settings**  
Closed GOP Cadence = 1

Frame rate = 50

GOP size = 90 frames

Fragment length = 2
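As a quick check of the rule, the following Python sketch (the function name is hypothetical, not part of any MediaConvert tooling) validates both sets of example values:

```python
def fragment_length_is_valid(gop_size, closed_gop_cadence, frame_rate, fragment_length):
    """Check the rule: fragment length must be a whole number and a
    multiple of (GOP size x Closed GOP cadence / Frame rate)."""
    base = gop_size * closed_gop_cadence / frame_rate
    # Fragment length must be a whole number...
    if fragment_length != int(fragment_length):
        return False
    # ...and an exact multiple of the base value.
    return (fragment_length / base).is_integer()

# Correct settings from the first example: 60 x 1 / 30 = 2.0
print(fragment_length_is_valid(60, 1, 30, 2))   # True
# Incorrect settings from the second example: 90 x 1 / 50 = 1.8
print(fragment_length_is_valid(90, 1, 50, 2))   # False
```

In the second example, a fragment length of 2 is not a multiple of 1.8, so a player could request a final segment that doesn't exist.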

## Finding the settings related to fragment length
<a name="finding-the-settings-related-to-fragment-length"></a>

When you set **Fragment length**, check your values for **Closed GOP cadence**, **GOP size**, and **Frame rate**.

### Fragment length
<a name="fragment-length"></a>

You can set the fragment length using either the console or the JSON job specification. The **Fragment length** setting applies to an output group and affects every output in the group.

**To find the Fragment length setting (console)**

1. On the **Create job** page, in the **Job** pane on the left, under **Output groups**, choose the name of your CMAF, DASH ISO, or Microsoft Smooth Streaming output group.

1. In the group settings section on the right, find **Fragment length**. 

   The group settings section is titled **CMAF group settings**, **DASH ISO group settings**, or **MS Smooth group settings**.

**To find the Fragment length setting (JSON job specification)**
+ Find `FragmentLength` as a child of `OutputGroupSettings`, as in the following example.

  ```
  {
    "Settings": {
      ...
      "Inputs": [
        ...
      ],
      "OutputGroups": [
        {
          "Name": "DASH ISO",
          "OutputGroupSettings": {
            "Type": "DASH_ISO_GROUP_SETTINGS",
            "DashIsoGroupSettings": {
              "SegmentLength": 30,
              "FragmentLength": 2,
              "SegmentControl": "SINGLE_FILE",
              "HbbtvCompliance": "NONE"
            }
          },
  		...
  ```

### Closed GOP cadence, GOP size, and frame rate
<a name="closed-gop-cadence-gop-size-and-framerate"></a>

You can set **Closed GOP cadence**, **GOP size**, and **Frame rate** using either the console or the JSON job specification. These settings apply to each output individually. Make sure that the values that you set for each output in the output group work with the value that you specify for the output group's **Fragment length**.

**Note**  
Your ABR stack has multiple outputs. Make sure to set these values in each output.

**To find the encoding settings for an output (console)**

1. On the **Create job** page, in the **Job** pane on the left, under **Output groups**, choose the name of your output, such as **Output 1**, **Output 2**, and so on.

1. In the **Encoding settings** section, the **Video** tab is selected automatically. Find **Closed GOP cadence**, **GOP size**, and **Frame rate** on this tab.

**To find the encoding settings for an output (JSON job specification)**
+ Find `GopClosedCadence`, `GopSize`, `FramerateNumerator`, and `FramerateDenominator` as children of the codec settings, as in the following example. In this example, the codec is `H_264`, so the parent of the codec settings is `H264Settings`.

  ```
  {
    "Settings": {
      ...
      "Inputs": [
        ...
      ],
      "OutputGroups": [
        {
          "Name": "DASH ISO",
          ...
          },
          "Outputs": [
            {
              "VideoDescription": {
                ...
                "CodecSettings": {
                  "Codec": "H_264",
                  "H264Settings": {
                    "InterlaceMode": "PROGRESSIVE",
                    "NumberReferenceFrames": 3,
                    "Syntax": "DEFAULT",
                    "Softness": 0,
                    "GopClosedCadence": 1,
                    "GopSize": 60,
  				  ...
                    "FramerateNumerator": 60,
                    "FramerateDenominator": 1
                  }
                },
                ...
              },
  ```
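Putting the two JSON excerpts in this topic together, a small script can cross-check each output's GOP settings against its output group's fragment length. This is an illustrative sketch, not part of the MediaConvert API; it assumes a settings dictionary shaped like the excerpts above, with a DASH ISO group and H.264 outputs:

```python
def fragment_length_problems(group):
    """Report outputs whose GOP settings don't work with the group's FragmentLength."""
    frag = group["OutputGroupSettings"]["DashIsoGroupSettings"]["FragmentLength"]
    problems = []
    for i, output in enumerate(group["Outputs"], start=1):
        h264 = output["VideoDescription"]["CodecSettings"]["H264Settings"]
        fps = h264["FramerateNumerator"] / h264["FramerateDenominator"]
        base = h264["GopSize"] * h264["GopClosedCadence"] / fps
        if not (frag / base).is_integer():
            problems.append(f"Output {i}: FragmentLength {frag} is not a multiple of {base}")
    return problems

group = {
    "OutputGroupSettings": {"DashIsoGroupSettings": {"FragmentLength": 2}},
    "Outputs": [{
        "VideoDescription": {"CodecSettings": {"H264Settings": {
            "GopClosedCadence": 1, "GopSize": 60,
            "FramerateNumerator": 60, "FramerateDenominator": 1,
        }}}
    }],
}
print(fragment_length_problems(group))  # [] -> the settings are consistent
```

Running a check like this before you submit a job can catch fragment length mismatches that would otherwise surface only as playback problems.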

# HLS player version support
<a name="hls-player-version-support"></a>

AWS Elemental MediaConvert automatically sets the player version metadata based on the features that you enable. Most HLS assets that you create with MediaConvert are compatible with HLS players version 2 and later.

This list shows the features that might require updated player support:

**Add I-frame only manifest**: HLS Output group > Output > Advanced > Add I-frame only manifest  
When you choose **Include**, viewers can play the asset with HLS players version 4 and later.  
When you choose **Exclude**, viewers can play the asset with HLS players version 2 and later.

**Audio track type**: HLS Output group > Output > Output settings > Advanced > Audio track type  
When you choose one of the **Alternate audio** options for any audio variants, viewers can play the asset with HLS players version 4 and later.  
When you choose **Audio-only variant stream** for **Audio track type**, or keep **Audio track type** unselected for all of your audio variants, viewers can play the asset with HLS players version 2 and later.

**DRM encryption method**: HLS output group > DRM encryption > Encryption method  
When you choose **SAMPLE-AES** for **DRM encryption**, **Encryption method**, viewers can play the asset with HLS players version 5 and later.  
When you choose any other value for **DRM encryption**, **Encryption method**, viewers can play the asset with HLS players version 2 and later.

**Descriptive video service flag**: HLS output group > Output (must be audio-only) > Output settings > Descriptive video service flag  
This setting is also available in CMAF output groups: CMAF output group > Output > CMAF container settings > Advanced > Descriptive video service flag  
To find this setting, your HLS or CMAF output must have only audio settings. In HLS outputs, you must remove the default **Video** tab.  
When you choose **Flag** for **Descriptive video service flag**, viewers can play the asset with HLS players version 5 and later.  
To create a compliant Apple HLS output: When you set **Descriptive video service flag** to **Flag**, you must also set **Audio track type** to **Alternative audio, auto select, default** or **Alternative audio, auto select, not default**.

**Manifest duration format**: HLS output group > Apple HLS group settings > Advanced > Manifest duration format   
When you set your manifest duration format to **Integer**, viewers can play the asset with HLS players version 2 and later.  
When you set your manifest duration format to **Floating point**, viewers can play the asset with HLS players version 3 and later.

**Segment control**: HLS output group > Apple HLS group settings > Segment control  
When you set segment control to **Single file**, viewers can play the asset with HLS players version 4 and later.  
When you set segment control to **Segmented files**, viewers can play the asset with HLS players version 2 and later.
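One way to reason about the list above: the minimum player version for an asset is the highest version that any single feature choice requires. The following Python sketch (a hypothetical helper with simplified feature names, not a MediaConvert API) captures that idea:

```python
# Minimum HLS player version required by each feature choice, per the list above.
FEATURE_MIN_VERSION = {
    "i_frame_only_manifest": 4,    # Add I-frame only manifest: Include
    "alternate_audio": 4,          # Audio track type: any Alternate audio option
    "sample_aes_encryption": 5,    # DRM encryption method: SAMPLE-AES
    "descriptive_video_flag": 5,   # Descriptive video service flag: Flag
    "floating_point_durations": 3, # Manifest duration format: Floating point
    "single_file_segments": 4,     # Segment control: Single file
}

def min_hls_player_version(enabled_features):
    """Highest version required by any enabled feature; version 2 is the baseline."""
    return max([2] + [FEATURE_MIN_VERSION[f] for f in enabled_features])

print(min_hls_player_version([]))                            # 2
print(min_hls_player_version(["floating_point_durations"]))  # 3
print(min_hls_player_version(["alternate_audio", "sample_aes_encryption"]))  # 5
```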

# Recommended encoding settings for video quality
<a name="video-quality"></a>

When you create a job with AWS Elemental MediaConvert, the encoding settings that you choose affect video quality, file size, and player compatibility. 

You can configure your job to allow MediaConvert to automatically select the best encoding settings for video quality, with a balanced output file size. Or, you can manually specify encoding settings to match your output or delivery requirements.

This section introduces basic concepts, describes typical settings, and provides guidance for choosing settings optimized for video quality.

**Topics**
+ [Reference for GOP structure and frame types](#gop-structure)
+ [GOP size recommended setting](#gop-size-settings)
+ [B-frames between reference frames recommended setting](#reference-frames)
+ [Closed GOP cadence recommended setting](#closed-gop-cadence)
+ [Dynamic sub-GOP recommended setting](#dynamic-sub-gop)
+ [GOP reference B-frames recommended setting](#gop-reference-b-frames)
+ [Min I-Interval recommended setting](#min-i-interval)
+ [Adaptive quantization recommended setting](#adaptive-quantization)

## Reference for GOP structure and frame types
<a name="gop-structure"></a>

When you create a job, the group of pictures (GOP) settings that you choose for your output affect video quality and player compatibility. This section introduces basic GOP concepts, describes typical GOP settings, and provides guidance for choosing settings optimized for video quality. 

A GOP is a specific arrangement of compressed video frame types. These frame types include the following:

 **I-Frames**   
Intra-coded frames. Contain all of the information that a decoder uses to decode the frame. Typically, I-frames use the most bits within a video stream.

 **IDR-Frames**   
Instantaneous Decoder Refresh frames. Similar to I-frames, they contain all of the information that a decoder uses to decode the frame. However, frames that follow an IDR-frame cannot reference any frame that comes before it.

 **P-Frames**   
Predicted frames. Contain the differences between the current frame and one or more frames before it. P-frames offer much better compression than I-frames and use fewer bits within a video stream.

 **B-Frames**  
Bidirectional predicted frames. Contain the differences between the current frame and one or more frames before or after it. B-frames offer the highest compression and take up the fewest bits within a video stream.

A typical GOP starts with an IDR-frame and follows with a repeating pattern of B- and P-frames. For example: `IDRBBPBBPBBPBB`
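To make the arrangement concrete, the following illustrative Python sketch (a hypothetical helper, assuming a fixed cadence of two B-frames between P-frames) generates the example pattern shown above:

```python
def gop_pattern(gop_size, b_frames=2):
    """Render a simple GOP as a string: an IDR-frame followed by a
    repeating cycle of B-frames and P-frames."""
    frames = ["IDR"]
    for i in range(1, gop_size):
        # Every (b_frames + 1)th frame after the IDR is a P-frame.
        frames.append("P" if i % (b_frames + 1) == 0 else "B")
    return "".join(frames)

print(gop_pattern(12))  # IDRBBPBBPBBPBB
```

Real encoders vary this arrangement; with **Dynamic sub-GOP** enabled, for example, the number of B-frames between P-frames is not fixed.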

The following topics provide more information about individual GOP settings and recommend settings that are optimized for video quality.

## GOP size recommended setting
<a name="gop-size-settings"></a>

GOP size is the number of frames in a GOP, and it defines the interval between IDR-frames. For example, if a GOP starts with an IDR-frame and is followed by a combination of 29 B- and P-frames, the GOP size is 30 frames. 

A typical GOP size is 1–2 seconds long and corresponds to the video frame rate. For example, if your output frame rate is 30 frames per second, a typical GOP size is 30 or 60 frames.

When you set your output video codec to `AVC (H.264)` or `HEVC (H.265)`, set **GOP mode control** to `Auto`. This allows MediaConvert to select an optimal GOP size.

**Note**  
Streaming video formats, including HLS, DASH, CMAF, and MSS, require the fragment or segment length to be a multiple of the GOP size. For more information, see [Setting the fragment length for streaming outputs](setting-the-fragment-length.md). When you set GOP mode control to Auto for these video formats, MediaConvert automatically selects a compatible and optimized GOP size that's relative to the fragment or segment length.

## B-frames between reference frames recommended setting
<a name="reference-frames"></a>

Defines the maximum number of B-frames that MediaConvert can use between reference frames.

A typical value is 1 or 2 when **GOP reference B-frames** is set to `Disabled`, and 3–5 when it is set to `Enabled`.

When you set your output video codec to `AVC (H.264)` or `HEVC (H.265)`, keep **B-frames between reference frames** blank. This allows MediaConvert to select an optimal number of B-frames between reference frames.

## Closed GOP cadence recommended setting
<a name="closed-gop-cadence"></a>

**Closed GOP cadence** defines the number of GOPs across which a P- or B-frame can reference frames. A GOP can be either *open* or *closed*. Open GOPs can have frames that reference a frame in a different GOP, while frames in closed GOPs reference only frames within the same GOP. 

When you set your output video codec to `AVC (H.264)` or `HEVC (H.265)`, keep **Closed GOP cadence** blank to allow MediaConvert to select an optimal closed GOP cadence.

## Dynamic sub-GOP recommended setting
<a name="dynamic-sub-gop"></a>

A dynamic sub-GOP can improve the subjective video quality of high-motion content. It does this by allowing the number of B-frames to vary.

When you set your output video codec to `AVC (H.264)` or `HEVC (H.265)`, set **Dynamic sub-GOP** to `Adaptive`. This allows MediaConvert to determine an optimal sub-GOP.

## GOP reference B-frames recommended setting
<a name="gop-reference-b-frames"></a>

When you set your output video codec to `AVC (H.264)` or `HEVC (H.265)`, set **GOP reference B-frames** to `Enabled` to allow B-frames to be referenced by other frame types. This improves the video quality of your output relative to its bitrate.

## Min I-Interval recommended setting
<a name="min-i-interval"></a>

Min I-Interval enforces a minimum number of frames between IDR-frames. This includes frames that are created at the beginning of a GOP or by scene change detection. Use Min I-Interval to improve video compression by varying GOP size when two IDR-frames would be created near each other.

When you set your output video codec to `AVC (H.264)` or `HEVC (H.265)`, keep **Min I-Interval** blank. This allows MediaConvert to select an optimal minimum I-interval.

## Adaptive quantization recommended setting
<a name="adaptive-quantization"></a>

Adaptive quantization selects the strength applied to the different quantization modes that MediaConvert uses, including flicker, spatial, and temporal quantization. MediaConvert uses adaptive quantization to assign bits according to the complexity of your video.

When you set your output video codec to `AVC (H.264)`, `HEVC (H.265)`, or `XAVC`, set **Adaptive quantization** to `Auto` to allow MediaConvert to select an optimal adaptive quantization.
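Taken together, the recommendations in this section correspond to codec settings like the following JSON fragment. Treat this as a sketch: the property names are taken from the MediaConvert job schema as I understand it, settings that the guidance says to keep blank (B-frames between reference frames, Closed GOP cadence, Min I-interval) are simply omitted, and the surrounding output structure is abbreviated.

```
"CodecSettings": {
  "Codec": "H_264",
  "H264Settings": {
    "GopSizeUnits": "AUTO",
    "DynamicSubGop": "ADAPTIVE",
    "GopBReference": "ENABLED",
    "AdaptiveQuantization": "AUTO"
  }
}
```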

# Using variables in your job settings
<a name="using-variables-in-your-job-settings"></a>

You can use variables, also called *format identifiers*, in your job settings. Format identifiers are values that you can put in your job settings that resolve differently in your outputs depending on the characteristics of the input files or the job. They are particularly useful in output presets, job templates, and jobs that you intend to duplicate and re-use. Note that variables are case sensitive.

For example, you might use the date format identifier `$d$` in your **Destination** setting. If you want your outputs organized by the date and time that the job starts, for **Destination** you might enter **s3://amzn-s3-demo-bucket1/\$d\$/**. For a job that starts on June 4, 2020, the service creates your outputs in `s3://amzn-s3-demo-bucket1/20200604/`.

For a list of the available format identifiers and examples of how to use them, see [List of settings variables with examples](#list-of-settings-variables-with-examples).

For information about format identifiers that function differently in streaming outputs, see [Using settings variables with streaming outputs](#using-settings-variables-with-streaming-outputs).

**Topics**
+ [List of settings variables with examples](#list-of-settings-variables-with-examples)
+ [Using settings variables with streaming outputs](#using-settings-variables-with-streaming-outputs)
+ [Specifying a minimum number of digits](#specifying-a-minimum-number-of-digits)

## List of settings variables with examples
<a name="list-of-settings-variables-with-examples"></a>

The following table provides information about each of the format identifiers that you can use in your AWS Elemental MediaConvert job. For information about format identifiers that function differently in streaming outputs, see [Using settings variables with streaming outputs](#using-settings-variables-with-streaming-outputs).


| Format identifier | Value to put in the job setting | Compatible job settings | Description and example | 
| --- |--- |--- |--- |
| Date and time |  `$dt$`  |  Destination Name modifier Segment modifier  |  UTC date and time of the start time of the job. Format: YYYYMMDDTHHMMSS Example: For a job that starts at 3:05:28 PM on June 4, 2020, **\$dt\$** resolves to `20200604T150528`.   | 
| Date |  `$d$`  |  Destination Name modifier Segment modifier  |  UTC date of the start time of the job.  Format: YYYYMMDD Example: For a job that starts on June 4, 2020, **\$d\$** resolves to `20200604`.   | 
| Time |  `$t$`  |  Destination Name modifier Segment modifier  |  UTC start time of the job.  Format: HHMMSS Example: For a job that starts at 3:05:28 PM, **\$t\$** resolves to `150528`.   | 
| Video bitrate |  `$rv$`  |  Name modifier Segment modifier  |  The video bitrate of the output, in kilobits. For QVBR outputs, the service uses video max bitrate, in kilobits. Example: If you set **Encoding settings**, **Video**, **Bitrate (bits/s)** to **50000000**, **\$rv\$** resolves to `50000`.  | 
| Audio bitrate |  `$ra$`  |  Name modifier Segment modifier  |  Total of all the audio bitrates in the output, in kilobits. Example: If you have an output with a single audio tab and you set **Encoding settings**, **Audio 1**, **Bitrate (bits/s)** to **256000**, **\$ra\$** resolves to `256`.  | 
| Container bitrate |  `$rc$`  |  Name modifier Segment modifier  |  Combined audio and video bitrate for the output, in kilobits. Example: You have an output with a **Video** settings tab and **Audio 1** settings tab. If you set **Encoding settings**, **Video**, **Bitrate (bits/s)** to **5000000** and you set **Encoding settings**, **Audio**, **Bitrate (bits/s)** to **96000** (96 kilobits), **\$rc\$** resolves to `5096`.  | 
| Video frame width |  `$w$`  |  Name modifier Segment modifier  |  The frame width, or horizontal resolution, in pixels. Example: If you set **Encoding settings**, **Video**, **Resolution (w x h)** to **1280** x **720** , **\$w\$** resolves to `1280`.  | 
| Video frame height |  `$h$`  |  Name modifier Segment modifier  |  The frame height, or vertical resolution, in pixels. Example: If you set **Encoding settings**, **Video**, **Resolution (w x h)** to **1280** x **720** , **\$h\$** resolves to `720`.  | 
| Framerate |  `$f$`  |  Name modifier Segment modifier  |  Framerate, in frames per second, truncated to the nearest whole number.  Example: If your framerate is **59.940**, **\$f\$** resolves to `59`.   | 
| Input file name |  `$fn$`  |  Destination Name modifier Segment modifier  |  Name of the input file, without the file extension. For jobs that have multiple inputs, this is the first file specified in the job. Example: If **Input 1** for your job is **s3://amzn-s3-demo-bucket/my-video.mov**, **\$fn\$** resolves to `my-video`.  | 
| Output container file extension |  `$ex$`  |  Name modifier Segment modifier  |  Varies depending on the output group. For **File group** outputs, this is the extension of the output container file. For other output groups, this is the extension of the manifest. Example for file group: If you choose **MPEG2-TS** for **Output settings**, **Container**, **\$ex\$** resolves to `m2ts`. Example for HLS group: If your output group is HLS, **\$ex\$** resolves to `m3u8`.  | 
| \$ |  `$$`  |  Name modifier Segment modifier  |  Escaped `$`. Example:  Suppose that you provide the following values:    Input file name: **file1.mp4**   Destination: **s3://amzn-s3-demo-bucket/**   Name modifier: **my-video\$\$hi-res-**   Your output file name and path resolves to `s3://amzn-s3-demo-bucket/my-video$hi-res-file1.mp4`.  | 
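To illustrate how the date and time identifiers resolve, the following Python sketch (a hypothetical helper, not part of the MediaConvert API) reproduces the table's examples for a job that starts at 3:05:28 PM UTC on June 4, 2020:

```python
from datetime import datetime, timezone

def resolve_datetime_identifiers(path, start):
    """Replace $dt$, $d$, and $t$ with the UTC start time of the job."""
    return (path.replace("$dt$", start.strftime("%Y%m%dT%H%M%S"))
                .replace("$d$", start.strftime("%Y%m%d"))
                .replace("$t$", start.strftime("%H%M%S")))

start = datetime(2020, 6, 4, 15, 5, 28, tzinfo=timezone.utc)
print(resolve_datetime_identifiers("s3://amzn-s3-demo-bucket1/$d$/", start))
# s3://amzn-s3-demo-bucket1/20200604/
```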

## Using settings variables with streaming outputs
<a name="using-settings-variables-with-streaming-outputs"></a>

Variables in your job settings, also called *format identifiers*, function differently for outputs in Apple HLS and DASH ISO output groups. Here are the differences:

**For Apple HLS Outputs**  
When you use the date and time format identifiers (`$dt$`, `$t$`, `$d$`) in the **Segment modifier** setting, these format identifiers resolve to the completion time of each segment, rather than to the start time of the job.

**Note**  
For jobs that use accelerated transcoding, segments might complete at the same time. This means that date and time format identifiers don't always resolve to unique values.

**For DASH ISO Outputs**  
You can use the following additional format identifiers in the **Name modifier** setting. These affect the DASH manifest in addition to the output file name:

\$Number\$  
In your output file names, `$Number$` resolves to a series of numbers that increment from 1. This replaces the default, nine-digit segment numbering in the segment file names. For example:   
+ If you specify **video\_\$Number\$** for **Name modifier**, the service creates segment files named `video_1.mp4`, `video_2.mp4`, and so on.
+ If you specify only **video\_** for **Name modifier**, the service creates segment files named `video_000000001.mp4`, `video_000000002.mp4`, and so on.
In your DASH manifest, AWS Elemental MediaConvert includes `duration` and `startNumber` inside the `SegmentTemplate` element, like this: `<SegmentTemplate timescale="90000" media="main_video_$Number$.mp4" initialization="main_video_$Number$init.mp4" duration="3375000"/>`  
If you use the `$Number$` format identifier in an output, you must also use it in every other output of the output group.

\$Bandwidth\$   
In your output file names, `$Bandwidth$` resolves to the value of **Video**, **Bitrate** plus the value of **Audio**, **Bitrate** in the output. Regardless of whether you include this format identifier, the service uses nine-digit segment numbering in the segment file names.  
For example, suppose you specify these values:  
+ **Video**, **Bitrate (bits/s)**: **50000000** 
+  **Audio**, **Bitrate (kbits/s)**: **96.0** (96,000 bits/s)
+ **Name modifier**: **video\_\$Bandwidth\$**
The value for \$Bandwidth\$ resolves to 50,096,000. The service creates segment files named `video_50096000_000000001.mp4`, `video_50096000_000000002.mp4`, and so on.  
In the manifest, AWS Elemental MediaConvert includes `duration` and `startNumber` inside the `SegmentTemplate` element, like this: `<SegmentTemplate timescale="90000" media="main_video_$Bandwidth$.mp4" initialization="main_video_$Bandwidth$init.mp4" duration="3375000"/>`.

\$Time\$  
In your output file names, `$Time$` resolves to the duration, in milliseconds, of the segment. When you include this format identifier, the service doesn't use the default nine-digit segment numbering in the segment file names.  
For example, if you specify **video180\_\_\$Time\$** for **Name modifier**, the service creates segment files named `video180__345600.mp4`, `video180__331680.mp4`, and so on. In these examples, the segment durations are 345,600 ms and 331,680 ms.  
In the manifest, AWS Elemental MediaConvert includes `SegmentTimeline` inside the `SegmentTemplate` element, like this:   

```
<Representation id="5" width="320" height="180" bandwidth="200000" codecs="avc1.4d400c">
        <SegmentTemplate media="video180_$Time$.mp4" initialization="videovideo180_init.mp4">
          <SegmentTimeline>
            <S t="0" d="345600" r="2"/>
            <S t="1036800" d="316800"/>
          </SegmentTimeline>
        </SegmentTemplate>
      </Representation>
```
If you use the `$Time$` format identifier in an output, you must also use it in every other output of the output group.

\$RepresentationID\$  
In your output file names, `$RepresentationID$` resolves to an output's numerical order in your job settings.  
In the manifest, AWS Elemental MediaConvert uses this identifier in the `SegmentTemplate` element to reference the correct paths for each representation.  
This format identifier is particularly useful when you need to organize your DASH outputs by representation ID.

## Specifying a minimum number of digits
<a name="specifying-a-minimum-number-of-digits"></a>

For format identifiers that return a number, you can specify a minimum number of digits that the format identifier will resolve to. When you do, the service adds padding zeros before any value that would return fewer digits.

Use the following syntax to specify the number of digits: **%0[number of digits]**. Put this value just before the final `$` of the format identifier.

For example, suppose that your video frame height is 720 and you want to specify a minimum of four digits, so that it appears in your file name as `0720`. To do that, use the following format identifier: **\$h%04\$**.

**Note**  
Values that are too large to be expressed in the number of digits you specify resolve with more digits.
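The following Python sketch (a hypothetical helper, not part of the MediaConvert API) illustrates the minimum-digits behavior for the `$h$` identifier, including the note above about values that are too large for the specified number of digits:

```python
import re

def resolve_height(path, height):
    """Resolve $h$ with an optional minimum-digits modifier, e.g. $h%04$."""
    def repl(match):
        modifier = match.group(1)                       # e.g. "%04", or None
        width = int(modifier[1:]) if modifier else 0    # "%04" -> pad to 4 digits
        return str(height).zfill(width)                 # zfill never truncates
    return re.sub(r"\$h(%0\d+)?\$", repl, path)

print(resolve_height("video-$h%04$.mp4", 720))  # video-0720.mp4
print(resolve_height("video-$h$.mp4", 720))     # video-720.mp4
print(resolve_height("video-$h%02$.mp4", 720))  # video-720.mp4 (value too large to pad)
```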