

# Step 5: Create captions encodes


Go through the list of outputs you created and set up the captions in each output group, one by one.

Follow the procedure that applies to the format category of the captions output.

**Topics**
+ [All captions except sidecar or SMPTE-TT in MS Smooth](output-embedded-and-more.md)
+ [Sidecar captions or SMPTE-TT captions in MS Smooth](output-sidecar-and-smptett-mss.md)
+ [TTML captions wrapped in ID3 data](output-ttml-in-id3.md)
+ [Setting up for 608 XDS data](608-xds-handling.md)

# All captions except sidecar or SMPTE-TT in MS Smooth


Follow this procedure if the format of the captions asset that you want to add belongs to the embedded, burn-in, or object category. You set up the captions, video, and audio in the same output.

**To create captions (*not* sidecar or SMPTE-TT)**

1. On the web interface, on the **Event** screen, click the appropriate output group.

1. If you have already set up this output group with video and audio, find the outputs where you want to add the captions. If you have not yet set up video and audio, create a new output in this output group; you can set up the captions now and set up the video and audio later.

1. Go to the output, then go to the stream that is associated with that output. For example, go to Stream 1.

1. Click the **+** beside **Caption** to add a Caption section.

1. Complete the fields that appear for the selected format. For details about a field, choose the Info link beside the field.     
[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/elemental-live/latest/ug/output-embedded-and-more.html)

1. If the output format is embedded and the output group is HLS, you can include captions language information in the manifest. You perform this setup in the output settings (separate from the captions encode). See [Set up the HLS Manifest (embedded captions)](set-up-the-hls-manifest.md).

1. If the output format is ARIB or DVB-Sub or SCTE-27, you must perform some extra setup in the output settings (separate from the captions encode). See [PIDS for ARIB output](complete-the-pids-for-arib.md) or [PIDs for DVB-Sub output](complete-the-pids-for-dvb-sub.md) or [PIDs for teletext output](complete-the-pids-for-teletext.md).

1. You now have a captions encode that is fully defined.

1. Repeat these steps to create captions, as applicable.

1. Go to the output group and output that this stream belongs to. Set the Stream field in that output to match the stream you created. 

1. When you are ready, save the event. 

   If the “Caption Stream Incompatible” message appears, see ["Caption Stream Incompatible" message](#embedded-caption-incompatible-message).

# Font styles for Burn-in or DVB-Sub Captions

When you set up the captions encode as described in [All captions except sidecar or SMPTE-TT in MS Smooth](output-embedded-and-more.md), you can specify the appearance of the captions if the output captions are Burn-in or DVB-Sub. In the following table, the first column shows the field name, the second column describes how to complete the field, and the third column specifies whether the description applies to Burn-in, DVB-Sub, or both.


|  Name  |  Description  | Applicability | 
| --- | --- | --- | 
|  Font  |  Click **Browse** to find a font file to use. The file must be on a server mounted to the node and must have the extension TTF or TTE.  Do not specify a font file if the caption source is embedded or teletext.  | Both | 
|  Font Size  |  Specify **auto** or enter a number. With **auto**, the font size scales with the size of the output. A positive integer specifies the exact font size in points.  | Both | 
|  Font Resolution  | Font resolution in DPI (dots per inch). Range: 96 to 600. Default is 96 dpi. | Both | 
|  Text Justify  | For conversions from STL to Burn-in: This field is ignored; the justification specified in the input STL file is always used. For all other conversions to Burn-in: [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/elemental-live/latest/ug/font-styles-for-burn-in-or-dvbsub.html)  | Burn-in | 
| Text Justify |  [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/elemental-live/latest/ug/font-styles-for-burn-in-or-dvbsub.html)  | DVB-Sub | 
|  X Position  | For conversions from STL to Burn-in: This field is ignored; the position specified in the input STL file is always used. For all other conversions to Burn-in: [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/elemental-live/latest/ug/font-styles-for-burn-in-or-dvbsub.html) | Burn-in | 
|  X Position  | Offset for the left edge of the caption relative to the horizontal axis of the video frame, in pixels. 0 is the left edge of the video frame. 10 pixels means offset 10 pixels to the right. Empty means 0 offset. | DVB-Sub | 
|  Y Position  | For conversions from STL to Burn-in: This field is ignored; the position specified in the input STL file is always used. For all other conversions to Burn-in: [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/elemental-live/latest/ug/font-styles-for-burn-in-or-dvbsub.html) | Burn-in | 
|  Y Position  | Offset of the top edge of the caption relative to the vertical axis of the video frame, in pixels. 0 is the top edge of the video frame. 10 pixels means offset 10 pixels from the top. Empty means position the captions towards the bottom of the output.  | DVB-Sub | 
| Fixed Grid |  Applies only for conversions from Teletext. [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/elemental-live/latest/ug/font-styles-for-burn-in-or-dvbsub.html) (Note that for conversions from STL to Burn-in, the font is always mono-spaced; this information is never in the STL and the value in the field is ignored.)  | Burn-in | 
| Fixed Grid |  Applies only for conversions from Teletext. [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/elemental-live/latest/ug/font-styles-for-burn-in-or-dvbsub.html) (Note that for conversions from STL to DVB-Sub, the font is always mono-spaced; this information is never in the STL and the value in the field is ignored.)  | DVB-Sub | 
| Font Color |  For conversions from STL: The font color is taken from the STL file but the value in Font Color is used to override as follows: [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/elemental-live/latest/ug/font-styles-for-burn-in-or-dvbsub.html) For all other conversions to Burn-in: Select the desired color.  | Burn-in | 
| Font Color | Select the desired color. | DVB-Sub | 
| Font Opacity | The opacity for the font color. Range 0 (transparent) to 255 (opaque). | Both | 
| Background Color | The color for the background rectangle. | Both | 
| Background Opacity | The opacity for the background rectangle. Range 0 (transparent) to 255 (opaque). | Both | 
| Outline Size | The size for the font outline, in pixels. Range 0 (no outline) to 10. | Both | 
| Outline Color | The color for the font outline. | Both | 
| Shadow Color | The color for the shadow cast by the captions. | Both | 
| Shadow Opacity | The opacity of the shadow. Range 0 (transparent) to 255 (opaque). Empty means 0. | Both | 
| Shadow X Offset | The horizontal offset of the shadow, in pixels. A value of -2 results in a shadow offset 2 pixels to the left. A value of 2 results in a shadow offset 2 pixels to the right. | Both | 
| Shadow Y Offset | The vertical offset of the shadow, in pixels. A value of -2 results in a shadow offset 2 pixels above the text. A value of 2 results in a shadow offset 2 pixels below the text. | Both | 
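
The numeric ranges in the table above lend themselves to a simple pre-save sanity check. The following Python sketch is illustrative only; the field names are hypothetical stand-ins for the web-interface fields, and this is not an Elemental Live API:

```python
# Illustrative sketch: validate Burn-in/DVB-Sub font style values against
# the ranges documented in the table above. Not an Elemental Live API.

RANGES = {
    "font_resolution": (96, 600),   # DPI; default is 96
    "font_opacity": (0, 255),       # 0 = transparent, 255 = opaque
    "background_opacity": (0, 255),
    "outline_size": (0, 10),        # pixels; 0 = no outline
    "shadow_opacity": (0, 255),
}

def validate_font_style(settings: dict) -> list[str]:
    """Return a list of error messages for out-of-range values."""
    errors = []
    for field, (lo, hi) in RANGES.items():
        value = settings.get(field)
        if value is not None and not (lo <= value <= hi):
            errors.append(f"{field}={value} outside {lo}..{hi}")
    return errors

style = {"font_opacity": 255, "outline_size": 12, "font_resolution": 96}
print(validate_font_style(style))  # outline_size is out of range
```

A helper like this mirrors the validation the web interface performs when you save the event.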

**Font Styles When You Use the Same Source in Several Outputs**

If you are using the same caption source in several Stream sections (in other words, you are selecting the same Caption Selector in the Caption Source field in several Stream sections), then you must set up the font style information identically in each Stream section. If you do not, you will get an error when you save the event.

For example, stream 1 may use Caption Selector 1 with the Destination Type set to Burn-in. And stream 2 may also use Caption Selector 1 with the Destination Type set to Burn-in. You set the font information once in stream 1 and again in stream 2. You must make sure to set up all the font information identically in both streams.

The same rule applies if the output captions are all DVB-Sub.
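The "set up identically" rule amounts to a consistency check across stream sections that share a caption selector. A minimal sketch of that check, with hypothetical field names (this is not product code):

```python
# Illustrative sketch: streams that share a caption selector must carry
# identical font style settings, or saving the event fails validation.
from collections import defaultdict

def check_shared_selectors(streams):
    """streams: list of dicts with 'caption_selector' and 'font_style' keys.
    Returns the selectors whose font styles differ between streams."""
    by_selector = defaultdict(set)
    for s in streams:
        # Freeze each style dict so differing variants can be counted.
        by_selector[s["caption_selector"]].add(tuple(sorted(s["font_style"].items())))
    return [sel for sel, styles in by_selector.items() if len(styles) > 1]

streams = [
    {"caption_selector": "Caption Selector 1",
     "font_style": {"font_size": "auto", "font_color": "white"}},
    {"caption_selector": "Caption Selector 1",
     "font_style": {"font_size": "auto", "font_color": "yellow"}},  # mismatch
]
print(check_shared_selectors(streams))  # ['Caption Selector 1']
```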

# Complete the PIDs for ARIB

This section applies when you set up the captions encode as described in [All captions except sidecar or SMPTE-TT in MS Smooth](output-embedded-and-more.md), if the output group is UDP/TS and the output captions format is ARIB. It describes how to complete the PIDs for the output that contains these captions.

**To complete the PIDs (ARIB)**

1. In the Output section, open the PID Control section.

1. Complete the ARIB Captions field and the ARIB Captions PID field as follows:

[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/elemental-live/latest/ug/complete-the-pids-for-arib.html)

# Complete the PIDs for DVB-Sub

This section applies when you set up the captions encode as described in [All captions except sidecar or SMPTE-TT in MS Smooth](output-embedded-and-more.md), if the output group is UDP/TS and the output captions format is DVB-Sub. It describes how to complete the PIDs for the output that contains these captions.

**To complete the PIDs (DVB-Sub)**

1. In the Output section, open the PID Control section.

1. In the **DVB Subtitle PIDs** field, enter the PID for the DVB-Sub caption in the stream for this output. Or leave the default.

# Complete the PIDs for Teletext

This section applies when you set up the captions encode as described in [All captions except sidecar or SMPTE-TT in MS Smooth](output-embedded-and-more.md), if the output group is UDP/TS and the output captions format is teletext. It describes how to complete the PIDs for the output that contains these captions.

**To complete the PIDs (teletext)**

1. In the Output section, open the PID Control section.

1. In the **DVB Teletext PID** field, enter the PID for the Teletext caption in the stream for this output. Or leave the default.
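
Across the three PID procedures above, each caption PID must be a valid transport-stream elementary PID and must not collide with any other PID in the output. The following hedged sketch checks both conditions; the 32 to 8190 range is the general MPEG-TS range for user-assignable PIDs, and the range Elemental Live actually accepts may be narrower:

```python
# Illustrative sketch: basic sanity checks for caption PIDs in a UDP/TS output.
# MPEG-TS elementary PIDs generally fall in 0x0020..0x1FFE (32..8190);
# the encoder's own accepted range may be narrower.

def check_pids(pids: dict) -> list[str]:
    """pids maps a label ('video', 'audio', 'dvb_sub', ...) to a PID number."""
    errors = []
    seen = {}
    for label, pid in pids.items():
        if not 32 <= pid <= 8190:
            errors.append(f"{label}: PID {pid} outside 32..8190")
        elif pid in seen:
            errors.append(f"{label}: PID {pid} collides with {seen[pid]}")
        else:
            seen[pid] = label
    return errors

print(check_pids({"video": 481, "audio": 482, "dvb_sub": 482}))
```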

# Set up the HLS Manifest (embedded captions)


This section applies when you set up the captions encode as described in [All captions except sidecar or SMPTE-TT in MS Smooth](output-embedded-and-more.md), if the output group is HLS and the output captions format is embedded. It describes how to include captions language information in the manifest. 

**To specify language information in the manifest**

1. In the HLS output group, go to the output. Click **Advanced**.

1. Complete **Caption Languages** as desired:
   + Omit: To omit any CLOSED-CAPTION lines in the manifest.
   + None: To include one CLOSED-CAPTION=None line in the manifest.
   + Insert: To insert one or more lines in the manifest.

1. If you chose **Insert**, more fields appear. Complete one or more sets of fields.
   + Complete as many sets of fields as there are languages in this output.
   + The order in which you enter the languages must match the order of the captions in the source. For example, if the captions are in the order English, French, Spanish, Portuguese, then set up CC1 as English, CC2 as French, and so on. If you do not order them correctly, the captions will be tagged with the wrong languages.
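
For reference, the three choices correspond to manifest output along these lines. This fragment is illustrative, following the HLS specification's `EXT-X-MEDIA` syntax for closed captions; the exact attributes Elemental Live writes may differ:

```
# Insert: one EXT-X-MEDIA line per language, in source order
#EXT-X-MEDIA:TYPE=CLOSED-CAPTIONS,GROUP-ID="cc",NAME="English",LANGUAGE="en",INSTREAM-ID="CC1"
#EXT-X-MEDIA:TYPE=CLOSED-CAPTIONS,GROUP-ID="cc",NAME="French",LANGUAGE="fr",INSTREAM-ID="CC2"
#EXT-X-STREAM-INF:BANDWIDTH=5000000,CLOSED-CAPTIONS="cc"

# None: the variant stream declares that no captions are present
#EXT-X-STREAM-INF:BANDWIDTH=5000000,CLOSED-CAPTIONS=NONE

# Omit: no CLOSED-CAPTIONS attribute and no caption EXT-X-MEDIA lines at all
```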

## "Caption Stream Incompatible" message


When you save the event, this validation message might appear:

Stream Caption Destination Type Is Incompatible With XX Output...

Typically, this error will occur because of the following scenario:
+ You have two outputs – perhaps HLS and DASH – that will have the same audio and video descriptions, which means you want them to share the same stream:
+ You set up the HLS output group and add an Output and Stream 1. You add embedded captions.
+ You then set up the DASH output group and add an Output and associate that output with the existing Stream 1.
+ The problem is that DASH cannot contain embedded captions. Therefore, when you save the event, you will get the validation message.

The solution to this problem is:
+ When you set up the DASH output, instead of associating it with the existing Stream 1, create a new stream (Stream 2).
+ In Stream 2, set up the video and audio to be identical to the video and audio in Stream 1.
+ For the DASH output, add the captions in the appropriate way.

The result: Assuming that you have set up the video and audio in both streams to be identical, the encoder will notice that they are identical and will in fact encode the video only once and the audio only once. So there will be no extra video encoding load from creating separate streams.
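
The deduplication described above can be pictured as keying encodes by their settings: identical settings collapse to one encode. A toy sketch, not encoder internals:

```python
# Illustrative sketch: two streams with identical video/audio settings
# result in a single video encode and a single audio encode.

def count_unique_encodes(streams):
    """Count distinct video and audio configurations across streams."""
    video = {tuple(sorted(s["video"].items())) for s in streams}
    audio = {tuple(sorted(s["audio"].items())) for s in streams}
    return len(video), len(audio)

stream1 = {"video": {"codec": "h264", "bitrate": 5_000_000},
           "audio": {"codec": "aac", "bitrate": 128_000}}
stream2 = {"video": {"codec": "h264", "bitrate": 5_000_000},
           "audio": {"codec": "aac", "bitrate": 128_000}}

print(count_unique_encodes([stream1, stream2]))  # (1, 1): no extra encoding load
```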

# Sidecar captions or SMPTE-TT captions in MS Smooth


Follow this procedure if the format of the captions asset that you want to add is a sidecar, as identified in [Step 4: Match formats to categories](categories-captions.md), or if the format is SMPTE-TT for an MS Smooth output.

When you follow this procedure, you set up each captions asset in its own output within the output group. When the event runs, the captions are set up as sidecars in the output package, except for SMPTE-TT captions in MS Smooth, which are set up as streams in the output package.

**To create captions (sidecar and SMPTE-TT)**

1. On the web interface, on the **Event** screen, click the output group. (You should have already created this output group).

1. In the output group, choose **Add Output**. A new output appears, and by default this output has one stream. Make note of this stream. For example, **Stream 2**.

1. In the **Streams** section (for example, in **Stream 2**), hover over **Video** and choose the **x** icon. Hover over **Audio** and choose the **x** icon. The stream is now empty. 

1. Beside **Captions**, choose the **+** icon. The stream now contains one captions encode and no video or audio encodes.

1. Complete the fields as shown in the table after this procedure.

1. Repeat these steps to create more sidecar captions in this or another output group, as applicable.

1. When you are ready, save the event.

   If the “Caption Stream Incompatible” message appears, see ["Caption Stream Incompatible" message](output-embedded-and-more.md#embedded-caption-incompatible-message).    
[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/elemental-live/latest/ug/output-sidecar-and-smptett-mss.html)

## "Caption Stream Incompatible" message


When you save the event, this validation message might appear:

Stream Caption Destination Type Is Incompatible With XX Output...

Typically, this error will occur because of the following scenario:
+ You have two outputs – perhaps HLS and RTMP – that will have the same audio and video descriptions, which means you want them to share the same stream:
+ You set up the HLS output group and add an Output and Stream 1. You add embedded captions.
+ You then set up the RTMP output group and add an Output and associate that output with the existing Stream 1.
+ The problem is that RTMP cannot contain embedded captions. Therefore, when you save the event, you will get the validation message.

The solution to this problem is:
+ When you set up the RTMP output, instead of associating it with the existing Stream 1, create a new stream (Stream 2).
+ In Stream 2, set up the video and audio to be identical to the video and audio in Stream 1.
+ For the RTMP output, add the captions in the appropriate way.

The result: Assuming that you have set up the video and audio in both streams to be identical, the encoder will notice that they are identical and will in fact encode the video only once and the audio only once. So there will be no extra video encoding load from creating separate streams.

# TTML captions wrapped in ID3 data


Follow this procedure to produce an output that includes TTML captions wrapped in ID3 data. This format is supported only in an MSS output. Unlike unwrapped TTML captions (which you create as described in [Sidecar captions or SMPTE-TT captions in MS Smooth](output-sidecar-and-smptett-mss.md)), these captions are included as an ID3 object in the same stream as the video. 

**To produce TTML captions wrapped in ID3 data**

1. On the web interface, on the Event screen, click the appropriate output group. 

1. In the output group, go to the output where you want to add captions. 

1. Identify the stream that is associated with that output. For example, if there are two outputs, the first might be associated with stream 1 and the second with stream 2.

1. Go to that Stream section. For example, go to Stream 1.

1. Click the **+** beside **Caption** to add a Caption section.

1. Complete the fields as shown in the table that follows this procedure.

1. Repeat these steps to add more captions for this output. For example, to add captions in another language.

1. Go to the MSS output group and output that this stream belongs to. Set the Stream field in that output to match the stream you created.

1. When you are ready, save the event.

   If the “Caption Stream Incompatible” message appears, see ["Caption Stream Incompatible" message](output-sidecar-and-smptett-mss.md#sidecar-caption-incompatible-message).


|  Field | Description | 
| --- | --- | 
| Caption Source | Select the Caption Selector you created when [specifying the input captions](create-caption-selectors.md).  | 
| Destination Type | Select the caption type. This type must be valid for your output type as per the relevant Supported Captions table. | 
|  Pass Style Information |  Applicable only if the source caption type is an Embedded combination (Embedded, Embedded+SCTE-20, SCTE-20+Embedded), or Teletext, TTML, SMPTE-TT, or CCF-TT. The choices are: [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/elemental-live/latest/ug/output-ttml-in-id3.html) (For other source caption types, the output is always simplified.)  | 
| Language | Complete if desired. This information may be useful to or required by a downstream system. | 
| Description | This field is automatically completed after you specify the language. | 
|  Use ID3 as Caption Content | Check this field to insert the TTML captions into ID3 data. | 
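
To picture what "TTML wrapped in ID3" means at the container level, here is a hedged sketch that builds an ID3v2.3 tag with a `PRIV` frame carrying a TTML document. The owner identifier is hypothetical and the framing Elemental Live actually writes is product-defined; this only illustrates the general shape of ID3-wrapped data:

```python
# Illustrative sketch: wrap a TTML document in an ID3v2.3 PRIV frame.
# The owner identifier below is hypothetical; the actual ID3 framing used
# by the encoder is product-defined.

def syncsafe(n: int) -> bytes:
    """ID3v2 tag sizes store 28 bits as four 7-bit bytes."""
    return bytes((n >> shift) & 0x7F for shift in (21, 14, 7, 0))

def wrap_ttml_in_id3(ttml: bytes, owner: str = "example.ttml") -> bytes:
    body = owner.encode("ascii") + b"\x00" + ttml   # owner id, NUL, payload
    # PRIV frame: 4-char ID, 4-byte size, 2 flag bytes, body
    frame = b"PRIV" + len(body).to_bytes(4, "big") + b"\x00\x00" + body
    # Tag header: "ID3", version 2.3.0, flags 0, syncsafe size, then the frame
    return b"ID3\x03\x00\x00" + syncsafe(len(frame)) + frame

tag = wrap_ttml_in_id3(b"<tt xmlns='http://www.w3.org/ns/ttml'>...</tt>")
print(tag[:3], b"PRIV" in tag)
```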

# Setting up for 608 XDS data


If your source content includes 608 XDS data, you can set up the event to include it or strip it from the output. 

The Extended Data Services (XDS or EDS) standard is part of EIA-608 and allows for the delivery of ancillary data. 

**Note**  
You set up handling of this source data for the entire event: either you include it in every output and stream, or you exclude it from every output and stream.

**To configure handling of this data**

1. In the Input section of the event, click **Advanced**.

1. Click the **Add Caption Selector** button.

1. Set the source to **Null**.

   You only need to create one Caption Selector for 608 XDS data, regardless of the number of outputs you are creating.

1. If you also want to extract regular captions, create more Caption Selectors according to the regular procedure.

1. In the **Global Processors** section, turn on **608 Extended Data Services** and complete the fields as desired.

**Note**  
No setup is required in the captions section of the output or the streams. 
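
For background on the data being passed through: XDS packets in the 608 stream carry a checksum byte chosen so that the 7-bit sum of every byte in the packet, checksum included, is zero. A small sketch of that arithmetic; the packet bytes here are placeholders, not a real XDS payload:

```python
# Illustrative sketch: XDS (EIA-608 Extended Data Services) checksum arithmetic.
# The checksum byte is chosen so that the 7-bit sum of the whole packet,
# checksum included, is zero. The bytes below are placeholders.

def xds_checksum(packet: bytes) -> int:
    """Return the checksum byte for a packet (class, type, data, 0x0F end)."""
    return (-sum(packet)) & 0x7F

packet = bytes([0x01, 0x03, 0x40, 0x40, 0x0F])  # placeholder class/type/data/end
cs = xds_checksum(packet)
print(hex(cs), (sum(packet) + cs) % 128 == 0)
```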