

# Setting up for captions


When you create an event, you must specify the format of the input captions. On the output side, you must specify the desired captions formats for each output. When you save the event, Elemental Live validates that the specified input format can produce the specified output format, and that the output format is supported in the specified output type.

**Topics**
+ [Step 1: Identify the source captions that you want](identify-captions-in-the-input.md)
+ [Step 2: Create captions selectors](create-caption-selectors.md)
+ [Step 3: Plan captions for the outputs](planning-captions-in-the-outputs.md)
+ [Step 4: Match formats to categories](categories-captions.md)
+ [Step 5: Create captions encodes](create-captions-encodes.md)

# Step 1: Identify the source captions that you want

You must identify the captions that you want to use and assign each to a captions selector. If you don't create any captions selectors, you can't include captions in the output; all the captions are removed from the media.

**To identify the captions you want**

1. Identify which captions are in the input (the provider of the input should provide you with this information) and identify which captions are available to you as external files. Identify the captions formats and, for each format, the languages. 

1. Identify which of those formats and languages you want to use.

1. Determine how many captions selectors to create in the input in the event, using the following guidance: 
   + For embedded passthrough, create a single captions selector for all languages. All languages are passed through; there is no other option. For details, see [Information for embedded](embedded.md).
   + For embedded-to-other-format, create one captions selector for each language.
   + For teletext passthrough, create a single captions selector for all languages (in fact, one captions selector for the entire content). All languages (teletext pages) are passed through; there is no other option. For details, see [Information for Teletext](teletext.md).
   + For teletext-to-other-format, create one captions selector for each language.
   + In all other cases, create one captions selector for each language and format combination.

1. You end up with a list of captions selectors to create. For example:
   + Captions Selector 1: teletext captions in Czech
   + Captions Selector 2: teletext captions in Polish

   You are not required to use all the languages that are available. You can ignore those you are not interested in.
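The selector-count rules above can be sketched as a small helper. This is an illustrative sketch only; the scenario names and the function itself are invented for this example and are not part of Elemental Live:

```python
def selectors_needed(scenario, languages):
    """Return the number of captions selectors to create for one input.

    scenario: one of "embedded-passthrough", "teletext-passthrough",
              "embedded-to-other", "teletext-to-other", or "other"
              (hypothetical labels for the cases in the list above).
    languages: list of (language, format) pairs you want to use.
    """
    if scenario in ("embedded-passthrough", "teletext-passthrough"):
        # A single selector carries every language; there is no other option.
        return 1
    if scenario in ("embedded-to-other", "teletext-to-other"):
        # One selector per language.
        return len({lang for lang, _ in languages})
    # All other cases: one selector per language-and-format combination.
    return len(set(languages))
```

For the Czech/Polish teletext example above, `selectors_needed("teletext-to-other", [("cs", "teletext"), ("pl", "teletext")])` returns 2, matching the two selectors in the list.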

# Step 2: Create captions selectors


After you have created a list of captions selectors, you can create the captions selectors in the event. 

**To create the captions selectors**

1. In the event, in the **Input** section, choose **Advanced**.

1. Choose **Add Caption Selector**.

1. For **Source**, choose the format of the source captions. 

   To identify SMPTE-TT as the source captions, choose **TTML**. When Elemental Live ingests the captions, it automatically detects that they are SMPTE-TT.

1. For most formats, more fields appear. For details about a field, choose the Info link next to the field. In addition, see the extra information about [DVB-Sub or SCTE-27](dvb-sub-or-scte27.md), [Embedded](embedded.md), [SCC](captions-input-scc.md), [SMI, SRT, STL, TTML](captions-input-other-sidecars.md), [teletext](teletext.md), or [Null](captions-input-null.md).

1. Create more captions selectors, as required. 

# Information for DVB-Sub or SCTE-27


This section provides information specific to DVB-Sub or SCTE-27 input captions. It describes the fields that appear when you choose **DVB-Sub** or **SCTE-27** in the **Source** field in the **Caption Selector** section of the event. For more context, see the steps earlier in this section.

DVB-Sub and SCTE-27 formats are supported only in TS inputs. You must specify the location of the captions.

Complete the **PID** and **Language code** fields in one of the ways described in the following table. Each row in the table describes a valid way to complete these two fields.



| PID | Language code | Result | 
| --- | --- | --- | 
| Specified | Blank | Extracts the captions from the specified PID. | 
| Blank | Specified | Extracts the captions from the first PID that Elemental Live encounters that matches the specified language. This might or might not be the PID with the lowest number. | 
| Specified | Specified | Extracts the captions from the specified PID. Elemental Live ignores the language code, therefore we recommend you leave it blank. | 
| Blank | Blank | Valid only if the source is DVB-Sub and the output is DVB-Sub. With this combination of PID and Language, all input DVB-Sub PIDs are included in the output. Not valid for SCTE-27. | 
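The table above can be summarized as selection logic. This is an illustrative sketch, not Elemental Live code; the function, parameter names, and the `(pid, language)` stream model are invented for the example:

```python
def select_captions_pids(pids, pid_field=None, language_field=None, source="DVB-Sub"):
    """Model the PID / Language code combinations from the table above.

    pids: list of (pid_number, language) tuples in the order the
          encoder encounters them. Returns the list of PIDs to extract.
    """
    if pid_field is not None:
        # PID specified: extract that PID; any language code is ignored.
        return [pid_field]
    if language_field is not None:
        # Language specified: the first matching PID encountered wins,
        # which is not necessarily the lowest-numbered PID.
        for pid, lang in pids:
            if lang == language_field:
                return [pid]
        return []
    # Both blank: valid only for DVB-Sub in and DVB-Sub out --
    # all input DVB-Sub PIDs pass through.
    if source == "DVB-Sub":
        return [pid for pid, _ in pids]
    raise ValueError("Blank PID and language code is not valid for SCTE-27")
```

Note how `select_captions_pids([(501, "pol"), (500, "cze")], language_field="pol")` returns `[501]` even though a lower-numbered PID exists, mirroring the second row of the table.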

# Information for embedded


This section provides information specific to embedded input captions. It describes the fields that appear when you choose **Embedded** in the **Source** field in the **Caption Selector** section of the event. For more context, see the steps earlier in this section.

Read this section if the input captions are any of the following: embedded (EIA-608 or CEA-708), embedded+SCTE-20, SCTE-20+embedded, or SCTE-20.

**Note**  
For captions in VBI data: If you are extracting embedded captions from the input and using embedded captions in the output, and if the input includes VBI data and you want to include all that data in the output, then do not follow this procedure. Instead, see [Passing through VBI data](captions-in-vbi-data.md).

# Determining the number of captions selectors needed

To determine the number of captions selectors you need to create in the event, follow these rules:
+ **Embedded Passthrough** – Create only one captions selector. With this scenario, all languages are automatically extracted and are automatically included in the output.
+ **Embedded In, Other Out** – If you are setting up embedded-to-other, create one captions selector for each language that you want to include in the output, to a maximum of four selectors.
+ **A combination of Embedded passthrough and Embedded conversion** – If you are setting up embedded passthrough in some outputs and embedded-to-other in other outputs, create one captions selector for each language that you want to include in the output, to a maximum of four selectors. Do not worry about a selector for the embedded passthrough output. Elemental Live will extract all the languages for that output, even though no selector exists to explicitly specify this action. 

# Completing the fields in the captions selector group
+ **Source**: 
  + Choose embedded if the source captions are embedded (EIA-608 or CEA-708), embedded+SCTE-20, or SCTE-20+embedded.
  + Choose SCTE-20 if the source captions are SCTE-20 alone.

# Completing the fields in the CC channel number
+ **CC Channel number**: This field specifies the language to extract. Complete as follows: 
  + If you are setting up embedded passthrough only (you are creating only one captions selector for the input embedded captions), this field is ignored, so keep the default.
  + If you are setting up embedded-to-another-format (you are creating several captions selectors, one for each language), enter the number of the CC channel (from the input) that holds the desired language. For example, if this captions selector is intended to hold the French captions and the French captions are in CC channel 2, enter 2 in this field.
+ **Force 608 to 708 Upconvert**: The embedded source captions can be EIA-608 captions, CEA-708 captions, or both EIA-608 and CEA-708. You can specify how you want these captions to be handled when Elemental Live is ingesting content. The following table describes the behavior for various scenarios.    
[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/elemental-live/latest/ug/cc-fields.html)
+ **Use SCTE-20 if Embedded Unavailable**: This field appears only if you set the **Source** to **Embedded**. If the source captions combine embedded (EIA-608 or CEA-708) and SCTE-20, you might want to set this field to **Auto**. Elemental Live will give preference to the 608/708 embedded captions but will switch to use the SCTE-20 captions when necessary. If you set this field to Off, Elemental Live will never use the SCTE-20 captions.

# Information for SCC


This section provides information specific to SCC input captions. It describes the fields that appear when you choose **SCC** in the **Source** field in the **Caption Selector** section of the event. For more context, see the steps earlier in this section.

SCC source captions are supplied in a captions file that is external to the video input. You must specify this file.
+ **External Caption File**: Specify the location of the file. 
+ **Time Delta**: Complete this field to adjust the timestamps in the captions file. With SCC files, the situation sometimes arises where the timestamp of the first caption does not work with the video. The start time for the video/audio is always 00:00:00, but the start time of the captions may be some completely different, arbitrary time, such as 20:00:15. Assume that, in the video, the first words are spoken at 00:06:15. Given that the start time for the captions file is 20:00:15, the first caption is marked as 20:06:30, a time that will usually never work with the video. The solution is to adjust the times in the captions file. In this example, subtract 20 hours and 15 seconds (72015 seconds) from the captions file. 

  Enter a value in this field to push the captions earlier or later: 
  + Enter a positive number to add to the times in the caption file. For example, enter **15** to add 15 seconds to all the times in the caption file. 
  + Enter a negative number to subtract from the times in the caption file. For example, enter **-5** to remove 5 seconds from all the times in the caption file. 

  The format of the times in the captions file does not have to match the value in the **Timecode Config** field (in the Input) of the video. The number you enter in this field simply delays the captions or makes them play earlier, regardless of the formats. 

  When using SCC, the video must have a value in the **Timecode Config** field. Otherwise, the captions are not inserted.
+ **Force 608 to 708 Upconvert**: SCC source captions are in EIA-608 format and are contained in an external file. The options for converting the captions are the following:
  + Checked: Convert the captions to CEA-708 format. 
  + Unchecked: Leave the captions unconverted. 
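The arithmetic behind the 20:00:15 example can be checked with a short sketch. This is illustrative only; Elemental Live takes the Time Delta as a plain number of seconds, and the helper function is invented for the example:

```python
def timecode_to_seconds(tc):
    """Convert an HH:MM:SS timecode string to a total number of seconds."""
    hours, minutes, seconds = (int(part) for part in tc.split(":"))
    return hours * 3600 + minutes * 60 + seconds

# The first caption is stamped 20:00:15 but the video starts at 00:00:00,
# so the Time Delta must subtract the entire offset:
delta = -(timecode_to_seconds("20:00:15") - timecode_to_seconds("00:00:00"))
print(delta)  # -72015, i.e. 20 hours and 15 seconds, entered as a negative delta
```

Entering **-72015** in **Time Delta** would shift the 20:06:30 caption back to 00:06:15, aligning it with the spoken words.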

# Information for SMI, SMPTE-TT, SRT, STL, TTML


This section provides information specific to SMI, SMPTE-TT, SRT, STL, and TTML input captions. It describes the fields that appear when you choose one of these formats in the **Source** field in the **Caption Selector** section of the **Create New Live Event** screen. For more context, see [Step 1: Identify the source captions that you want](identify-captions-in-the-input.md).

With these formats, the source captions are supplied in a captions file that is external to the video input. You must specify this file.
+ **External Caption File**: Specify the location of this file. 
+ **Time Delta**: Complete this field to adjust the timestamps in the captions file. With these files, the situation sometimes arises where the timestamp of the first caption does not work with the video. With these types of captions, the start time for both the video/audio and the captions is always 00:00:00. Assume that, in the video, the first words are spoken at 00:06:15, but in the captions file this time is marked as 00:06:18, and every other caption is also off by 3 seconds. The solution is to adjust the times in the captions file. In this example, subtract 3 seconds from the captions file. 

  Enter a value in this field to push the captions earlier or later.
  + Enter a positive number to add to the times in the caption file. For example, enter **2** to add 2 seconds to all the times in the caption file. 
  + Enter a negative number to subtract from the times in the caption file. For example, enter **-3** to remove 3 seconds from all the times in the caption file. 

# Information for Teletext


This section provides information specific to Teletext input captions. It describes the fields that appear when you choose **Teletext** in the **Source** field in the **Caption Selector** section of the event. For more context, see [Step 1: Identify the source captions that you want](identify-captions-in-the-input.md).

Teletext is a form of data that can contain several types of information, not just captions. Teletext can be present in SDI input, in MXF input, and in TS input, in which case it might be referred to as "DVB teletext." 

You can handle teletext in one of the following ways:
+ If you want to extract the entire teletext input, you must set up teletext passthrough. The entire teletext can never be converted to another format. Teletext passthrough is supported only in a TS output. 
+ You can extract individual captions pages (the captions in a specific language) and convert them to another captions format.
+ You cannot extract individual captions pages (the captions in a specific language) and keep them in teletext. 

# Determining the number of captions selectors needed
+ If you are setting up teletext passthrough captions, create only one captions selector, even if you want to include multiple languages in the output. With this scenario, all languages are automatically extracted and are automatically included in the output. 
+ If you are setting up teletext-to-other, create one captions selector for each language that you want to include in the output. For example, one selector to extract English teletext, and one selector to extract Swedish teletext.
+ If you are setting up teletext passthrough in some outputs and teletext-to-other in other outputs, create individual selectors for the teletext-to-other, one for each language being converted. Do not worry about a selector for the teletext passthrough output. Elemental Live will extract all the data in the teletext, even though there is not a selector to explicitly specify this action.

# Completing the fields in the Captions Selector Group
+ **Source**: Choose **Teletext**.
+ **Page**: This field specifies the page of the desired language. Complete as follows: 
  + If you are setting up teletext passthrough captions (you are creating only one captions selector for the input captions), leave blank: the value is ignored.
  + If you are converting teletext to another format (you are creating several captions selectors, one for each language), specify the page for the desired language. If you leave this field blank, you get a validation error when you save the event. 

# Information for null


The list of sources in the **Source** field for the caption selector includes the option **Null**. This source is not intended for stripping out captions. Instead, it is used for [608 XDS data](608-xds-handling.md).

# Step 3: Plan captions for the outputs

If you followed the instructions in [Step 1: Identify the source captions that you want](identify-captions-in-the-input.md), you should have a list of the captions formats and languages that will be available for inclusion in the outputs. 

You must now plan the captions information for the outputs.

**To plan captions for the outputs**

1. Identify the types of output media that you plan to create in the event. For example, MS Smooth and HLS.

1. Identify the streams (the combinations of video and audio) that you plan to create for each output media. 

1. Map each output to the stream it uses. For example:
   + HLS (Output 1) uses video/audio Stream 1.
   + DASH (Output 2) also uses video/audio Stream 1. (Or it might need its own stream if the video requirements are different.)

1. For each output media, identify which input captions will be converted to which output formats. For example, you might convert teletext captions to TTML for the MS Smooth output media, and those same teletext captions to WebVTT for the HLS output media. 

   The output formats that are possible depend on the input formats and the type of output media. See [Reference: Supported captions](supported-captions.md) to determine which output captions are possible given the input format. 

1. Identify the languages for each output format:
   + In general, count each language separately. 
   + Exception: For embedded passthrough, count all languages as one. 
   + Exception: For teletext passthrough, count all languages as one.

**The Result**  
You end up with a list of outputs, and the captions formats and languages for each output. For example:
+ MS Smooth output with TTML captions in Czech
+ MS Smooth output with TTML captions in Polish
+ HLS output with WebVTT captions in Czech
+ HLS output with WebVTT captions in Polish
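One way to keep track of this plan is a simple table of records. This is a hypothetical bookkeeping aid for your own planning, not something Elemental Live consumes; the media names and language codes are just the example above:

```python
from collections import defaultdict

# Each entry: (output media, output captions format, language code).
captions_plan = [
    ("MS Smooth", "TTML",   "cs"),
    ("MS Smooth", "TTML",   "pl"),
    ("HLS",       "WebVTT", "cs"),
    ("HLS",       "WebVTT", "pl"),
]

# Group by output media to see what captions each output must carry.
by_output = defaultdict(list)
for media, fmt, lang in captions_plan:
    by_output[media].append((fmt, lang))

print(dict(by_output))
# {'MS Smooth': [('TTML', 'cs'), ('TTML', 'pl')], 'HLS': [('WebVTT', 'cs'), ('WebVTT', 'pl')]}
```

A grouping like this makes it easy to see, per output group, how many captions encodes or captions-only outputs you will need in Step 5.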

## Planning for Output in Multiple Formats


You can include captions from two or more different formats in an output. For example, you can include both embedded captions and WebVTT captions in an HLS output, to give the downstream system more choices about which captions to use. The only rules for multiple formats are the following:
+ The output container must support all the formats. See [Reference: Supported captions](supported-captions.md).
+ The font styles in all the captions that are associated with an output must match. This means that the end result must be identical, not that you must use the same option to get that result. For example, all captions that are associated with the output must be white for the first language and blue for the second language.

Managing this style matching can be a little tricky. For information about the font style options, see [Support for font styles in output captions](support-for-font-styles-in-output-captions.md).

# Step 4: Match formats to categories


There are different procedures to follow to create captions encodes in the output. The correct procedure depends on the "category" that the output captions belong to. There are six categories of captions, described in the following table.

On the list of outputs that you have created, make a note of the category that each captions option belongs to. 


|  Format of output captions  |  Category of this format  | 
| --- | --- | 
| Ancillary+Embedded | Embedded. The captions in ancillary format are in the ancillary data in the stream. The embedded captions are embedded in the video. To choose Ancillary+Embedded, choose **Embedded** as the **Destination Type** (in the [procedure](output-embedded-and-more.md)). Elemental Live automatically produces both ancillary and embedded. | 
|  ARIB   |  Object  | 
|  Burn-in  |  Burn-in  | 
|  DVB-Sub  |  Object  | 
|  Embedded  |  Embedded  | 
|  Embedded+SCTE-20  |  Embedded   | 
|  RTMP CaptionInfo  |  Object  | 
| RTMP CuePoint | Object | 
| SCC | Sidecar | 
|  SCTE-20+Embedded  |  Embedded  | 
| SCTE-27 | Object | 
| SMI | Sidecar | 
| SMPTE-TT | Sidecar when in Archive | 
| SMPTE-TT | Stream when in MS Smooth | 
| SRT | Sidecar | 
|  teletext   |  Object  | 
| TTML wrapped in ID3 data | Wrapped in ID3 data | 
|  TTML  |  Sidecar  | 
|  WebVTT  |  Sidecar  | 

For example, your list of outputs might now look like this: 
+ MS Smooth output with TTML captions (sidecar) in Czech.
+ MS Smooth output with TTML captions (sidecar) in Polish.
+ HLS output with WebVTT captions (sidecar) in Czech.
+ HLS output with WebVTT captions (sidecar) in Polish.
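The category table can be captured as a lookup for your planning notes. This is an illustrative sketch; the dictionary follows the table above, with SMPTE-TT keyed by output container because its category depends on the container:

```python
# Format of output captions -> category (from the table above).
CAPTIONS_CATEGORY = {
    "Ancillary+Embedded": "Embedded",
    "ARIB": "Object",
    "Burn-in": "Burn-in",
    "DVB-Sub": "Object",
    "Embedded": "Embedded",
    "Embedded+SCTE-20": "Embedded",
    "RTMP CaptionInfo": "Object",
    "RTMP CuePoint": "Object",
    "SCC": "Sidecar",
    "SCTE-20+Embedded": "Embedded",
    "SCTE-27": "Object",
    "SMI": "Sidecar",
    ("SMPTE-TT", "Archive"): "Sidecar",     # category depends on container
    ("SMPTE-TT", "MS Smooth"): "Stream",
    "SRT": "Sidecar",
    "Teletext": "Object",
    "TTML wrapped in ID3 data": "Wrapped in ID3 data",
    "TTML": "Sidecar",
    "WebVTT": "Sidecar",
}

print(CAPTIONS_CATEGORY["WebVTT"])  # Sidecar
```

Annotating each planned output with its category this way tells you which Step 5 procedure applies to it.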

## Captions embedded in video


The captions are carried inside the video encode, which is itself in an output in the output group. Only one captions asset ever exists within that video encode. That single asset might contain captions for several languages. 

## Captions object


The captions are in their own "captions encode" in an output in the output group. They are not part of the video encode. However, they are in the same output as their corresponding video and audio encodes. There might be several captions encodes in the output, for example, for different languages. 

## Sidecar


The captions are each in their own output in the output group, separate from the output that contains the video and audio. Each captions output contains only one captions asset (file), meaning that it is a "captions-only" output. The output group might contain several "captions-only" outputs, for example, one for each language in the output group.

## TTML captions wrapped in ID3 data


The captions are converted to TTML and included in ID3 data. (The other way to produce TTML output is as a sidecar.) 

## SMPTE-TT in MS Smooth


The captions are handled as a separate stream in MS Smooth. 

Note that SMPTE-TT captions for other package types are handled as sidecars. However, for both sidecar handling and stream handling, the [procedure for setting up](output-sidecar-and-smptett-mss.md) SMPTE-TT captions in the output is identical. Elemental Live will package the SMPTE-TT captions correctly for the package type.

## Burn-in


Here, the captions are converted into text and then overlaid on the picture directly in the video encode. Strictly speaking, once the overlay occurs, these are not really captions because they are indistinguishable from the video. 

# Step 5: Create captions encodes


Go through the list of outputs you created and set up the captions in each output group, one by one.

Follow the procedure that applies to the format category of the captions output.

**Topics**
+ [All captions except sidecar or SMPTE-TT in MS Smooth](output-embedded-and-more.md)
+ [Sidecar captions or SMPTE-TT captions in MS Smooth](output-sidecar-and-smptett-mss.md)
+ [TTML captions wrapped in ID3 data](output-ttml-in-id3.md)
+ [Setting up for 608 XDS data](608-xds-handling.md)

# All captions except sidecar or SMPTE-TT in MS Smooth


Follow this procedure if the format of the captions asset that you want to add belongs to the embedded, burn-in, or object category. You will set up the captions, video, and audio in the same output.

**To create captions (*not* sidecar or SMPTE-TT)**

1. On the web interface, on the **Event** screen, click the appropriate output group.

1. If you have already set up this output group with video and audio, find the outputs where you want to add the captions. If you have not yet set up video and audio, create a new output in this output group; you can set up the captions now and the video and audio later.

1. Go to the output, then go to the stream that is associated with that output. For example, go to Stream 1.

1. Click the **+** beside **Caption** to add a Caption section.

1. Complete the fields that appear for the selected format. For details about a field, choose the Info link beside the field.     
[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/elemental-live/latest/ug/output-embedded-and-more.html)

1. If the output format is embedded and the output group is HLS, you can include captions language information in the manifest. You perform this setup in the output settings (separate from the captions encode). See [Set up the HLS Manifest (embedded captions)](set-up-the-hls-manifest.md).

1. If the output format is ARIB, DVB-Sub, or teletext, you must perform some extra setup in the output settings (separate from the captions encode). See [PIDs for ARIB output](complete-the-pids-for-arib.md), [PIDs for DVB-Sub output](complete-the-pids-for-dvb-sub.md), or [PIDs for teletext output](complete-the-pids-for-teletext.md).

1. You now have a captions encode that is fully defined.

1. Repeat these steps to create more captions encodes, as applicable.

1. Go to the output group and output that this stream belongs to. Set the Stream field in that output to match the stream you created. 

1. When you are ready, save the event. 

   If the “Caption Stream Incompatible” message appears, see ["Caption Stream Incompatible" message](#embedded-caption-incompatible-message).

# Font styles for Burn-in or DVB-Sub Captions

When you set up the captions encode as described in [All captions except sidecar or SMPTE-TT in MS Smooth](output-embedded-and-more.md), you can specify the appearance of the captions if the output captions are Burn-in or DVB-Sub. In the following table, the first column shows the field name, the second column describes how to complete the field, and the third column specifies whether the description applies to Burn-in or DVB-Sub.


|  Name  |  Description  | Applicability | 
| --- | --- | --- | 
|  Font  |  Click **Browse** to find a font file to use. The file must be on a server mounted to the node and must have the extension TTF or TTE.  Do not specify a font file if the caption source is embedded or teletext.  | Both | 
|  Font Size  |  Specify **auto** or enter a number. When set to auto, the font size scales depending on the size of the output. A positive integer specifies the exact font size in points.  | Both | 
|  Font Resolution  | Font resolution in DPI (dots per inch). Range: 96 to 600. Default is 96 dpi. | Both | 
|  Text Justify  | For conversions from STL to Burn-in: This field is ignored; the justification specified in the input STL file is always used. For all other conversions to Burn-in: [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/elemental-live/latest/ug/font-styles-for-burn-in-or-dvbsub.html)  | Burn-in | 
| Text Justify |  [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/elemental-live/latest/ug/font-styles-for-burn-in-or-dvbsub.html)  | DVB-Sub | 
|  X Position  | For conversions from STL to Burn-in: This field is ignored; the position specified in the input STL file is always used. For all other conversions to Burn-in: [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/elemental-live/latest/ug/font-styles-for-burn-in-or-dvbsub.html) | Burn-in | 
|  X Position  | Offset for the left edge of the caption relative to the horizontal axis of the video frame, in pixels. 0 is the left edge of the video frame. 10 pixels means offset 10 pixels to the right. Empty means 0 offset. | DVB-Sub | 
|  Y Position  | For conversions from STL to Burn-in: This field is ignored; the position specified in the input STL file is always used. For all other conversions to Burn-in: [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/elemental-live/latest/ug/font-styles-for-burn-in-or-dvbsub.html) | Burn-in | 
|  Y Position  | Offset of the top edge of the caption relative to the vertical axis of the video frame, in pixels. 0 is the top edge of the video frame. 10 pixels means offset 10 pixels from the top. Empty means position the captions towards the bottom of the output.  | DVB-Sub | 
| Fixed Grid |  Applies only for conversions from Teletext. [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/elemental-live/latest/ug/font-styles-for-burn-in-or-dvbsub.html) (Note that for conversions from STL to Burn-in, the font is always mono-spaced; this information is never in the STL and the value in the field is ignored.)  | Burn-in | 
| Fixed Grid |  Applies only for conversions from Teletext. [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/elemental-live/latest/ug/font-styles-for-burn-in-or-dvbsub.html) (Note that for conversions from STL to DVB-Sub, the font is always mono-spaced; this information is never in the STL and the value in the field is ignored.)  | DVB-Sub | 
| Font Color |  For conversions from STL: The font color is taken from the STL file but the value in Font Color is used to override as follows: [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/elemental-live/latest/ug/font-styles-for-burn-in-or-dvbsub.html) For all other conversions to Burn-in: Select the desired color.  | Burn-in | 
| Font Color | Select the desired color. | DVB-Sub | 
| Font Opacity | The opacity for the font color. Range 0 (transparent) to 255 (opaque). | Both | 
| Background Color | The color for the background rectangle. | Both | 
| Background Opacity | The opacity for the background rectangle. Range 0 (transparent) to 255 (opaque). | Both | 
| Outline Size | The size for the font outline, in pixels. Range 0 (no outline) to 10. | Both | 
| Outline Color | The color for the font outline. | Both | 
| Shadow Color | The color for the shadow cast by the captions. | Both | 
| Shadow Opacity | The opacity of the shadow. Range 0 (transparent) to 255 (opaque). Empty means 0. | Both | 
| Shadow X Offset | The horizontal offset of the shadow, in pixels. A value of -2 results in a shadow offset 2 pixels to the left. A value of 2 results in a shadow offset 2 pixels to the right. | Both | 
| Shadow Y Offset | The vertical offset of the shadow, in pixels. A value of -2 results in a shadow offset 2 pixels above the text. A value of 2 results in a shadow offset 2 pixels below the text. | Both | 

**Font Styles When You Use the Same Source in Several Outputs**

If you are using the same caption source in several Stream sections (in other words, you are selecting the same Caption Selector in the Caption Source field in several Stream sections), then you must set up the font style information identically in each Stream section. If you do not, you will get an error when you save the event.

For example, stream 1 may use Caption Selector 1 with the Destination Type set to Burn-in. And stream 2 may also use Caption Selector 1 with the Destination Type set to Burn-in. You set the font information once in stream 1 and again in stream 2. You must make sure to set up all the font information identically in both streams.

The same rule applies if the output captions are all DVB-Sub.

# Complete the PIDs for ARIB

This section applies when you set up the captions encode as described in [All captions except sidecar or SMPTE-TT in MS Smooth](output-embedded-and-more.md), if the output group is UDP/TS and the output captions format is ARIB. It describes how to complete the PIDs for the output that contains these captions.

**To complete the PIDs (ARIB)**

1. In the Output section, open the PID Control section.

1. Complete the ARIB Captions field and the ARIB Captions PID field as follows:

[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/elemental-live/latest/ug/complete-the-pids-for-arib.html)

# Complete the PIDs for DVB-Sub

This section applies when you set up the captions encode as described in [All captions except sidecar or SMPTE-TT in MS Smooth](output-embedded-and-more.md), if the output group is UDP/TS and the output captions format is DVB-Sub. It describes how to complete the PIDs for the output that contains these captions.

**To complete the PIDs (DVB-Sub)**

1. In the Output section, open the PID Control section.

1. In the **DVB Subtitle PIDs** field, enter the PID for the DVB-Sub caption in the stream for this output. Or leave the default.

# Complete the PIDs for Teletext

This section applies when you set up the captions encode as described in [All captions except sidecar or SMPTE-TT in MS Smooth](output-embedded-and-more.md), if the output group is UDP/TS and the output captions format is teletext. It describes how to complete the PIDs for the output that contains these captions.

**To complete the PIDs (teletext)**

1. In the Output section, open the PID Control section.

1. In the **DVB Teletext PID** field, enter the PID for the Teletext caption in the stream for this output. Or leave the default.
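Whichever captions format you are configuring, any PID that you enter in these fields must be a valid MPEG-TS packet identifier. PIDs are 13-bit values; 0x0000 through 0x001F are reserved by the standard and 0x1FFF is the null-packet PID, so user-assignable PIDs fall in the range 32 to 8190. A minimal sketch of that range check:

```python
# Illustrative sketch: validate a user-entered PID against the MPEG-TS
# (ISO/IEC 13818-1) assignable range. 0x0000-0x001F are reserved and
# 0x1FFF is the null-packet PID, leaving 0x0020-0x1FFE for user PIDs.

def is_valid_user_pid(pid: int) -> bool:
    return 0x0020 <= pid <= 0x1FFE

print(is_valid_user_pid(500))     # True: a typical captions PID
print(is_valid_user_pid(0x1FFF))  # False: reserved for null packets
```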

# Set up the HLS Manifest (embedded captions)


This section applies when you set up the captions encode as described in [Step 1: Identify the source captions that you want](identify-captions-in-the-input.md), if the output group is HLS and the output captions format is embedded. It describes how to include captions language information in the manifest. 

**To specify language information in the manifest**

1. In the HLS output group, go to the output. Click **Advanced**.

1. Complete **Caption Languages** as desired:
   + Omit: Omits all CLOSED-CAPTIONS lines from the manifest.
   + None: Includes one CLOSED-CAPTIONS=NONE line in the manifest.
   + Insert: Inserts one or more CLOSED-CAPTIONS lines in the manifest.

1. If you chose **Insert**, more fields appear. Complete one or more sets of fields:
   + Complete as many sets of fields as there are languages in this output.
   + The order in which you enter the languages must match the order of the captions in the source. For example, if the captions are in the order English, then French, then Spanish, then Portuguese, then set up CC1 as English, CC2 as French, and so on. If you do not order them correctly, the captions will be tagged with the wrong languages.
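The ordering requirement in the steps above amounts to pairing channels and languages by position. A minimal sketch, using hypothetical language codes:

```python
# Illustrative sketch: the order of the language entries must match the
# order of the caption services in the source. Pairing by position,
# CC1 gets the first language, CC2 the second, and so on.

source_caption_order = ["eng", "fra", "spa", "por"]  # order in the source
channels = [f"CC{n}" for n in range(1, len(source_caption_order) + 1)]

manifest_languages = dict(zip(channels, source_caption_order))
print(manifest_languages)
# {'CC1': 'eng', 'CC2': 'fra', 'CC3': 'spa', 'CC4': 'por'}
```

Swapping two entries in `source_caption_order` would mislabel both channels, which is exactly the tagging error the procedure warns about.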

## "Caption Stream Incompatible" message


When you save the event, this validation message might appear:

Stream Caption Destination Type Is Incompatible With XX Output...

Typically, this error occurs in the following scenario:
+ You have two outputs – perhaps HLS and DASH – that will have the same audio and video descriptions, so you want them to share the same stream.
+ You set up the HLS output group, and add an output and Stream 1. You add embedded captions.
+ You then set up the DASH output group, add an output, and associate that output with the existing Stream 1.
+ The problem is that DASH cannot contain embedded captions. Therefore, when you save the event, you get the validation message.

To solve this problem:
+ When you set up the DASH output, instead of associating it with the existing Stream 1, create a new stream (Stream 2).
+ In Stream 2, set up the video and audio to be identical to the video and audio in Stream 1.
+ For the DASH output, add the captions in the appropriate way.

The result: because you have set up the video and audio in both streams to be identical, the encoder notices that they are identical and encodes the video only once and the audio only once. Creating the separate stream therefore adds no extra encoding load.
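The deduplication described above can be sketched as keying each encode by its settings. This is an illustrative model, not Elemental Live's implementation; the setting names are hypothetical.

```python
# Illustrative sketch: why duplicating a stream adds no encoding load.
# If two streams have identical video settings, an encoder can key each
# encode by those settings and produce each unique encode only once.

video_settings = {"codec": "h264", "width": 1280,
                  "height": 720, "bitrate": 3_000_000}

stream_1 = {"video": dict(video_settings), "captions": "embedded"}  # HLS
stream_2 = {"video": dict(video_settings), "captions": "webvtt"}    # DASH

# Key each stream's video encode by its settings; identical settings collapse.
unique_encodes = {tuple(sorted(s["video"].items()))
                  for s in (stream_1, stream_2)}
print(len(unique_encodes))  # 1: the video is encoded only once
```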

# Sidecar captions or SMPTE-TT captions in MS Smooth


Follow this procedure if the format of the captions asset that you want to add is a sidecar, as identified in [Step 4: Match formats to categories](categories-captions.md), or if the format is SMPTE-TT for an MS Smooth output.

When you follow this procedure, you set up each captions asset in its own output within the output group. When the event runs, the captions will be set up as sidecars in the output package, except for SMPTE-TT captions in MS Smooth, which will be set up as streams in the output package.

**To create captions (sidecar and SMPTE-TT)**

1. On the web interface, on the **Event** screen, click the output group. (You should have already created this output group).

1. In the output group, choose **Add Output**. A new output appears, and by default this output has one stream. Make note of this stream. For example, **Stream 2**.

1. In the **Streams** section (for example, in **Stream 2**), hover over **Video** and choose the **x** icon. Hover over **Audio** and choose the **x** icon. The stream is now empty. 

1. Beside **Captions**, choose the **+** icon. The stream now contains one captions encode and no video or audio encodes.

1. Complete the fields as shown in the table after this procedure.

1. Repeat these steps to create more sidecar captions in this or another output group, as applicable.

1. When you are ready, save the event.

   If the “Caption Stream Incompatible” message appears, see ["Caption Stream Incompatible" message](output-embedded-and-more.md#embedded-caption-incompatible-message).    
[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/elemental-live/latest/ug/output-sidecar-and-smptett-mss.html)

## "Caption Stream Incompatible" message


When you save the event, this validation message might appear:

Stream Caption Destination Type Is Incompatible With XX Output...

Typically, this error occurs in the following scenario:
+ You have two outputs – perhaps HLS and RTMP – that will have the same audio and video descriptions, so you want them to share the same stream.
+ You set up the HLS output group, and add an output and Stream 1. You add embedded captions.
+ You then set up the RTMP output group, add an output, and associate that output with the existing Stream 1.
+ The problem is that RTMP cannot contain embedded captions. Therefore, when you save the event, you get the validation message.

To solve this problem:
+ When you set up the RTMP output, instead of associating it with the existing Stream 1, create a new stream (Stream 2).
+ In Stream 2, set up the video and audio to be identical to the video and audio in Stream 1.
+ For the RTMP output, add the captions in the appropriate way.

The result: because you have set up the video and audio in both streams to be identical, the encoder notices that they are identical and encodes the video only once and the audio only once. Creating the separate stream therefore adds no extra encoding load.

# TTML captions wrapped in ID3 data


Follow this procedure to produce an output that includes TTML captions wrapped in ID3 data. This format is supported only in an MSS output. Unlike unwrapped TTML captions (which you create as described in [Sidecar captions or SMPTE-TT captions in MS Smooth](output-sidecar-and-smptett-mss.md)), these captions are included as an ID3 object in the same stream as the video. 

**To produce TTML captions wrapped in ID3 data**

1. On the web interface, on the Event screen, click the appropriate output group. 

1. In the output group, go to the output where you want to add captions. 

1. Identify the stream that is associated with that output. In this example, there are two outputs: the first is associated with Stream 1 and the second with Stream 2.

1. Go to that Stream section. For example, go to Stream 1.

1. Click the **+** icon beside **Caption** to add a Caption section.

1. Complete the fields as shown in the table that follows this procedure.

1. Repeat these steps to add more captions for this output. For example, to add captions in another language.

1. Go to the MSS output group and the output that this stream belongs to. Set the **Stream** field in that output to match the stream that you created.

1. When you are ready, save the event.

   If the “Caption Stream Incompatible” message appears, see ["Caption Stream Incompatible" message](output-sidecar-and-smptett-mss.md#sidecar-caption-incompatible-message).


|  Field | Description | 
| --- | --- | 
| Caption Source | Select the Caption Selector you created when [specifying the input captions](create-caption-selectors.md).  | 
| Destination Type | Select the caption type. This type must be valid for your output type as per the relevant Supported Captions table. | 
|  Pass Style Information |  Applicable only if the source caption type is an embedded combination (Embedded, Embedded+SCTE-20, or SCTE-20+Embedded), Teletext, TTML, SMPTE-TT, or CCF-TT. The choices are: [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/elemental-live/latest/ug/output-ttml-in-id3.html) (For other source caption types, the output is always simplified.)  | 
| Language | Complete if desired. This information may be useful to or required by a downstream system. | 
| Description | This field is automatically completed after you specify the language. | 
|  Use ID3 as Caption Content | Check this field to insert the TTML captions into ID3 data. | 
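To make "wrapped in ID3 data" concrete: the general idea is that the TTML document travels as the payload of an ID3v2 tag multiplexed into the stream, rather than as a separate sidecar file. The following is a generic sketch of that wrapping using an ID3v2.3 PRIV frame; the frame type and owner identifier are assumptions for illustration, not Elemental Live's actual on-the-wire format.

```python
# Illustrative sketch (not Elemental Live's implementation): wrap a TTML
# document as the payload of a minimal ID3v2.3 tag with one PRIV frame.
# The owner identifier "example.ttml" is a hypothetical placeholder.
import struct

def synchsafe(n: int) -> bytes:
    """Encode a size as 4 x 7-bit bytes, as the ID3v2 header requires."""
    return bytes((n >> shift) & 0x7F for shift in (21, 14, 7, 0))

def wrap_in_id3(ttml: bytes, owner: bytes = b"example.ttml") -> bytes:
    body = owner + b"\x00" + ttml                     # PRIV frame body
    # Frame = 4-byte ID + 4-byte size + 2 flag bytes + body (ID3v2.3 layout).
    frame = b"PRIV" + struct.pack(">I", len(body)) + b"\x00\x00" + body
    # Tag header = "ID3" + version 3.0 + flags + synchsafe tag size.
    return b"ID3\x03\x00\x00" + synchsafe(len(frame)) + frame

tag = wrap_in_id3(b"<tt xmlns='http://www.w3.org/ns/ttml'/>")
print(tag[:3])  # b'ID3'
```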

# Setting up for 608 XDS data


If your source content includes 608 XDS data, you can set up the event to include it or strip it from the output. 

The Extended Data Services (XDS or EDS) standard is part of EIA-608 and allows for the delivery of ancillary data. 

**Note**  
You set up handling of this source data for the entire event: either you include it in every output and stream, or you exclude it from every output and stream.

**To configure handling of this data**

1. In the Input section of the event, click **Advanced**.

1. Click the **Add Caption Selector** button.

1. Set the source to **Null**.

   You only need to create one Caption Selector for 608 XDS data, regardless of the number of outputs you are creating.

1. If you also want to extract regular captions, create more Caption Selectors according to the regular procedure.

1. In the **Global Processors** section, turn on **608 Extended Data Services** and complete the fields as desired.

**Note**  
No setup is required in the captions section of the output or the streams. 
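The setup above can be summarized as data. This is an illustrative sketch only; the field names are hypothetical placeholders, not Elemental Live's actual parameters.

```python
# Illustrative sketch (hypothetical field names): the shape of the 608 XDS
# setup described above. One null-source caption selector covers the whole
# event, and a global processor toggle controls inclusion.

event = {
    "caption_selectors": [
        {"name": "XDS Selector", "source": "null"},  # one selector, event-wide
    ],
    "global_processors": {
        "608_extended_data_services": {"enabled": True},
    },
    # Note: no per-output or per-stream captions setup is required for XDS.
}

xds_selectors = [s for s in event["caption_selectors"] if s["source"] == "null"]
print(len(xds_selectors))  # 1: a single selector covers every output
```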