

# Planning the outputs in the channel
<a name="planning-the-channel-in-workflow"></a>

You should plan the AWS Elemental MediaLive channel as the second stage of planning a transcoding *workflow*. You should have already performed the first stage of setting up the upstream and downstream systems, as described in [Preparing the upstream and downstream systems in a workflow](container-planning-uss-dss.md).

In the channel, you configure the characteristics of the outputs, and you can include a wide array of video features. But before you plan these details, you should plan the basic features of the channel.

**Note**  
On the output side, we refer to each video, audio, or captions stream, track, or program as an *encode*.

**Topics**
+ [Identify the output encodes](planning-encodes.md)
+ [Map the output encodes to the sources](channel-map-output-source.md)
+ [Design the encodes](designing-encodes.md)

# Identify the output encodes
<a name="planning-encodes"></a>

When you prepared the downstream systems, you [identified the output groups](identify-downstream-system.md) that you need. Now, as part of the planning of the channel, you must identify the encodes to include in each output group you have decided to create. An *encode* refers to the audio, video, or captions streams in the output.

**Topics**
+ [Identify the video encodes](channel-planning-video-encodes.md)
+ [Identify the audio encodes](channel-planning-audio-encodes.md)
+ [Identify the captions encodes](channel-planning-captions-encodes.md)
+ [Summary of encode rules for output groups](encode-rules.md)
+ [Example of a plan for output encodes](plan-encodes-example.md)

# Identify the video encodes
<a name="channel-planning-video-encodes"></a>

You must decide on the number of video encodes and their codecs. Follow this procedure for each output group. 

1. Determine the maximum number of encodes that are allowed in the output group. The following rules apply for each type of output group.  
[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/medialive/latest/ug/channel-planning-video-encodes.html)

1. If the output group allows more than one video encode, decide how many you want. Keep in mind that you can create multiple output encodes from the single video source that MediaLive ingests.

1. Identify the codec or codecs for the video encodes. 
   + For most types of output groups, the downstream system dictates the codec for each video encode, so you obtained this information when you [identified the output groups](identify-downstream-system.md). 
   + For an Archive output group, you decide which codec suits your purposes.

1. Identify the resolution and bitrate for each video encode. You might have obtained requirements or recommendations from your downstream system when you [identified the output groups](identify-downstream-system.md).

1. Identify the frame rates for each video encode. If you are using more than one video encode, you can ensure compatibility by choosing output frame rates that are multiples of the lowest frame rate used. 

   Examples:
   + 29.97 and 59.94 frames per second are compatible frame rates.
   + 15, 30, and 60 frames per second are compatible frame rates.
   + 29.97 and 30 frames per second are *not* compatible frame rates.
   + 30 and 59.94 frames per second are *not* compatible frame rates. 
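The multiples rule above can be sketched as a small check. This is an illustrative helper, not part of MediaLive; frame rates are expressed as exact rationals so that NTSC rates such as 29.97 (30000/1001) compare correctly.

```python
from fractions import Fraction

def compatible_frame_rates(rates):
    """Return True if every rate is a whole-number multiple of the lowest rate.

    Rates are (numerator, denominator) pairs, so NTSC rates such as
    29.97 fps (30000/1001) are represented exactly.
    """
    fractions = sorted(Fraction(n, d) for n, d in rates)
    lowest = fractions[0]
    return all((rate / lowest).denominator == 1 for rate in fractions)

# 29.97 and 59.94 fps: 59.94 is exactly 2x 29.97, so they are compatible.
print(compatible_frame_rates([(30000, 1001), (60000, 1001)]))  # True
# 29.97 and 30 fps: their ratio is 1001/1000, so they are not compatible.
print(compatible_frame_rates([(30000, 1001), (30, 1)]))        # False
```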


# Identify the audio encodes
<a name="channel-planning-audio-encodes"></a>

You must decide on the number of audio encodes. Follow this procedure for each output group. 

1. Determine the maximum number of encodes that are allowed in the output group. The following rules apply for each type of output group.  
[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/medialive/latest/ug/channel-planning-audio-encodes.html)

1. If the output group allows more than one audio encode, decide how many you want. These guidelines apply:
   + Each different combination of output codec, coding mode, and language is one encode.

     MediaLive can produce a specific coding mode only if the source contains that coding mode or a higher mode. For example, MediaLive can create 1.0 from a 1.0 or a 2.0 source. It can't create 5.1 from a 2.0 source. 
   + MediaLive can produce a specific language only if the source contains that language. 
   + MediaLive can produce more than one encode for a given language. 

     For example, you could choose to include Spanish in Dolby 5.1 and in AAC 2.0.
   + There is no requirement for the count of encodes to be the same for all languages. For example, you could create two encodes for Spanish, and only one encode for the other languages.
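The counting rule in these guidelines can be sketched as follows. The plan below is hypothetical; the point is that each distinct combination of codec, coding mode, and language is one encode, while repeats add nothing.

```python
# Each distinct (codec, coding mode, language) combination is one audio
# encode. This counts the encodes for a hypothetical plan.
planned_audio = [
    ("AAC", "2.0", "English"),
    ("AAC", "2.0", "Spanish"),
    ("Dolby Digital", "5.1", "Spanish"),  # a second encode for Spanish
    ("AAC", "2.0", "Spanish"),            # duplicate: not a new encode
]

unique_encodes = set(planned_audio)
print(len(unique_encodes))  # 3
```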

1. Identify the bitrate for each audio encode. You might have obtained requirements or recommendations from your downstream system when you [identified the output groups](identify-downstream-system.md). 

# Identify the captions encodes
<a name="channel-planning-captions-encodes"></a>

You must decide on the number of captions encodes. Follow this procedure for each output group. 

1. Determine the maximum number of captions encodes that are allowed in the output group. The following rules apply for each type of output group.  
[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/medialive/latest/ug/channel-planning-captions-encodes.html)

1. Identify the category that each caption format belongs to. See the list in [Captions categories](categories-captions.md). For example, WebVTT captions are sidecar captions.

1. Use this category to identify the number of captions encodes you need in the output group.
   + For embedded captions, you always create one captions encode.
   + For object-style captions and sidecar captions, you create one captions encode for each format and language that you want to include.
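The counting rule above can be sketched as a small helper. The function and its arguments are hypothetical, for illustration only: embedded captions contribute one encode, while object-style and sidecar captions contribute one encode per format-and-language combination.

```python
def count_captions_encodes(embedded_present, format_language_pairs):
    """Count captions encodes for one output group.

    embedded_present: True if the group carries embedded captions
    format_language_pairs: (format, language) pairs for object-style
    or sidecar captions, one encode each.
    """
    count = 1 if embedded_present else 0
    count += len(set(format_language_pairs))
    return count

# One embedded encode, plus WebVTT sidecars in English and French.
print(count_captions_encodes(True, [("WebVTT", "en"), ("WebVTT", "fr")]))  # 3
```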

# Summary of encode rules for output groups
<a name="encode-rules"></a>

 This table summarizes the rules for encodes for each output group. In the first column, find the output group that you want, then read across the row.



| Type of output group | Rule for video encodes | Rule for audio encodes | Rule for captions encodes | 
| --- | --- | --- | --- | 
| Archive | One or more video encodes. | Zero or more audio encodes. | Zero or more captions encodes. The captions are either embedded or object-style captions. | 
| CMAF Ingest | One or more video encodes. Typically, there are multiple video encodes. | Zero or more audio encodes. Typically, there are multiple audio encodes.  | Zero or more captions encodes. Typically, there are caption languages to match the audio languages. The captions are embedded or sidecar captions. | 
| Frame Capture | One video encode. | Zero audio encodes. | Zero captions encodes. | 
| HLS or MediaPackage | One or more video encodes. Typically, there are multiple video encodes. | Zero or more audio encodes. Typically, there are multiple audio encodes.  | Zero or more captions encodes. Typically, there are caption languages to match the audio languages. The captions are either embedded or sidecar captions. | 
| Microsoft Smooth | One or more video encodes. Typically, there are multiple video encodes. | Zero or more audio encodes. Typically, there are multiple audio encodes.  | Zero or more captions encodes. Typically, there are caption languages to match the audio languages. The captions are always sidecar captions. | 
| RTMP |  One video encode.  | Zero or one audio encode.  | Zero or one captions encode. The captions are either embedded or object-style captions. | 
| SRT caller |  One or more video encodes.  | One or more audio encodes. | Zero or more captions encodes. The captions are either embedded or object-style captions. | 
| UDP |  One or more video encodes.   | One or more audio encodes.  | Zero or more captions encodes. The captions are either embedded or object-style captions. | 

Some output groups also support audio-only outputs. See [Setting up the output](audio-only-outputs-and-outputgroups.md).

Some output groups also support outputs that contain JPEG files, to support trick play according to the Roku specification. See [Trick-play track via the Image Media Playlist specification](trick-play-roku.md).

# Example of a plan for output encodes
<a name="plan-encodes-example"></a>

After you have performed this procedure, you should have information that looks like this example.


**Example**  
[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/medialive/latest/ug/plan-encodes-example.html)

# Map the output encodes to the sources
<a name="channel-map-output-source"></a>

In the first step of planning the channel, you identified the number of encodes you need in each output group. You must now determine which assets from the source you can use to produce those encodes.

**Result of this procedure**  
After you have performed this procedure, you will have identified the following key components that you will create in the channel:
+ The video input selectors 
+ The audio input selectors
+ The captions input selectors

Identifying these components is the last step in planning the *input* side of the channel. 

**To map the output to the sources**

1. Obtain the *list of output encodes* you want to produce. You created this list in the [previous step](planning-encodes.md). It is useful to organize this list into a table. For example:  
**Example**    
[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/medialive/latest/ug/channel-map-output-source.html)

1. Obtain the *list of sources* that you created when you assessed the source content and collected identifiers. For an example of such a list, see [Assess the upstream system](evaluate-upstream-system.md).

1. In your table of output encodes, add two more columns, labeled *Source* and *Identifier in source*. 

1. For each encode (column 2), find a line in the *list of sources* that can produce that encode. Add the source codec and the identifier of that source codec. This example shows a completed table.  
**Example**    
[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/medialive/latest/ug/channel-map-output-source.html)

   You will use this information when you create the channel:
   + You will use the source and source identifier information when you [create the input selectors](input-video-selector.md).
   + You will use the characteristics information when you [create the encodes](creating-a-channel-step6.md) in the output groups.

1. After you have identified the source assets, group those assets that are being used more than once, to remove the duplicates.

1. Label each asset by its type—video, audio, or captions.  
**Example**    
[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/medialive/latest/ug/channel-map-output-source.html)
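The table built in the steps above can be sketched as plain records. The encode names, sources, and identifiers here are hypothetical; yours come from your own list of sources. Step 5 (grouping assets to remove duplicates) then reduces to collecting the unique source/identifier pairs, each of which becomes one input selector.

```python
# A sketch of the mapping table: each output encode mapped to a source
# asset and its identifier (all values hypothetical).
output_encodes = [
    {"encode": "VideoA", "source": "HEVC video",      "identifier": "PID 501"},
    {"encode": "AudioA", "source": "AAC 2.0 English", "identifier": "PID 502"},
    {"encode": "AudioB", "source": "AAC 2.0 French",  "identifier": "PID 503"},
    {"encode": "AudioC", "source": "AAC 2.0 French",  "identifier": "PID 503"},
]

# Group the assets that are used more than once, so each unique asset
# appears only once. Each unique asset becomes one input selector.
unique_assets = {(row["source"], row["identifier"]) for row in output_encodes}
print(len(unique_assets))  # 3
```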

## Example of mapping
<a name="channel-map-example"></a>

The following diagrams illustrate the mapping of the output encodes back to source assets. The first diagram shows the outputs (at the top) and the sources (at the bottom). The other three diagrams show the same outputs and sources, with the mappings for video, for audio, and for captions.

**Encodes and assets**

![\[Diagram showing HLS, RTMP, and Archive sections with various video, audio, and caption sources.\]](http://docs.aws.amazon.com/medialive/latest/ug/images/channel-design-map-in-out.png)


**Mapping video encodes to assets**

![\[Diagram showing video, audio, and caption sources mapped to HLS, RTMP, and Archive outputs.\]](http://docs.aws.amazon.com/medialive/latest/ug/images/channel-design-map-in-out-V.png)


**Mapping audio encodes to assets**

![\[Diagram showing audio and video sources mapped to HLS, RTMP, and Archive outputs.\]](http://docs.aws.amazon.com/medialive/latest/ug/images/channel-design-map-in-out-A.png)


**Mapping captions encodes to assets**

![\[Diagram showing video, audio, and caption sources mapped to HLS, RTMP, and Archive outputs.\]](http://docs.aws.amazon.com/medialive/latest/ug/images/channel-design-map-in-out-C.png)


# Design the encodes
<a name="designing-encodes"></a>

In the first step of planning the channel, you [identified](planning-encodes.md) the video, audio, and captions encodes to include in each output group. In the second step, you organized these encodes into outputs in each output group. 

Now in this third step, you must plan the configuration parameters for each encode. As part of this plan, you identify opportunities for sharing encodes among outputs in the same output group in the channel, and among outputs in different output groups in the channel.

**Result of this procedure**  
After you have performed this procedure, you will have a list of video, audio, and captions encodes to create.

**Topics**
+ [Plan the encodes](plan-encodes.md)
+ [Identify encode sharing opportunities](plan-encode-sharing.md)

# Plan the encodes
<a name="plan-encodes"></a>

In [Map the output encodes to the sources](channel-map-output-source.md), you sketched out a plan for the encodes you want to create in each output group. Below is the example of the plan from that step, showing the outputs and encodes, and the sources for those encodes.

At some point, you must fill in the details for the encodes identified in the second and third columns of this table. You have a choice:
+ You can decide these details now. 
+ You can decide the details later, when you are actually creating the channel. If you decide to do this, we recommend you still read the procedures after the table, to get an idea of what is involved in defining an encode.


**Example**  
[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/medialive/latest/ug/plan-encodes.html)

**Design the details for each video encode**

For each video encode in your table, you have already identified the source asset, codec, resolution and bitrate. You must now identify all the other encoding parameters you need to set.

Follow this procedure for each individual video encode.

1. Look at the fields in the video encode section of each output. To view these fields, follow these steps. Don't worry about completing all the sections; you only want to display the video encode fields, and you will then cancel out of creating the channel.
   + On the MediaLive home page, choose **Create channel**. 

     If you've created a channel before, you won't see the home page. In that case, in the MediaLive navigation pane, choose **Channels**, and then choose **Create channel**.
   + On the **Create channel** page, under **Output groups**, choose **Add**. 

     Don't worry that you haven't completed any of the earlier sections in the channel. You are only trying to display all the fields for the video encode.
   + In the **Add output group** section, choose **HLS** and choose **Confirm**.
   + Under that output group, choose **Output 1**.
   + In the **Output** section, go to the **Stream settings** section, and choose the **Video** link. 
   + In the **Codec settings** field, choose the codec that you want for this video encode. More fields appear. Choose the field labels for all the sections to display all the fields.

1. In each section, determine whether you need to change the defaults. 
   + Many of the fields have defaults, which means you can leave the field value as is. For details about a field and its default value, choose the **Info** link next to the field.
   + There are some fields that you might need to set according to instructions from your downstream system, to match the expectations of the downstream system.
   + There are some fields where the value you enter affects the output charges for this channel. These are:
     + The **Width** and **Height** fields (which define the video resolution).
     + The **Framerate** fields.
     + The **Rate control** fields.

     For information about charges, see [the MediaLive price list](https://aws.amazon.com/medialive/pricing/).
   + You can read about some of the fields in the following sections:
     + For information about the **Color space** fields, see [Handling complex color space conversions](color-space.md).
      + For information about the **Additional encoding settings** fields, see [Setting up enhanced VQ mode](video-enhancedvq.md).
     + For information about the **Rate control** fields, see [Setting the rate control mode](video-encode-ratecontrol.md). There are fields in this section that affect the output charges for this channel. For more information about charges, see [the MediaLive price list](https://aws.amazon.com/medialive/pricing/).
     + For information about the **Timecode** fields, see [Working with timecodes and timestamps](timecode.md).

1. Make detailed notes about the values for all the fields you plan to change. Do this for every video encode that you identified.
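The fields you note in this procedure correspond to a video description in the MediaLive `CreateChannel` API. The following is a minimal sketch of one AVC encode as a request fragment, assuming the `H264Settings` field names from the MediaLive API; verify them against the current API reference before use, and note that `Name` is a hypothetical value you choose.

```python
# A sketch of one 1080p AVC video encode as a VideoDescription fragment
# for the MediaLive CreateChannel API (field names per that API).
video_description = {
    "Name": "video_1080p2997",   # hypothetical encode name
    "Width": 1920,               # resolution affects output charges
    "Height": 1080,
    "CodecSettings": {
        "H264Settings": {
            "Bitrate": 5_000_000,         # 5 Mbps
            "FramerateControl": "SPECIFIED",
            "FramerateNumerator": 30000,  # 29.97 fps as a rational
            "FramerateDenominator": 1001,
            "RateControlMode": "CBR",     # rate control affects charges
        }
    },
}

print(video_description["CodecSettings"]["H264Settings"]["Bitrate"])  # 5000000
```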

**Design the details for each audio encode**

For each audio encode in your table, you have already identified the source asset, codec and bitrate. You must now identify all the other encoding parameters you need to set.

Follow this procedure for each individual audio encode.

1. Look at the fields in the audio encode section of each output. To view these fields, follow the same steps as for the video encodes, but choose the **Audio 1** link. 

   With audio encodes, there aren't many fields for each codec. But the fields for the codecs are very different from each other.

1. Study the fields and make notes. 

**Design the details for each captions encode**

For each captions encode in your table, you have already identified the source captions, format, and language. You must now identify all the other encoding parameters you need to set.

Follow this procedure for each individual captions encode.

1. Look at the fields in the captions encode section of each output. To view these fields, follow the same steps as for the video encodes, but choose **Add caption** to add a captions section, because there is no captions section by default. 

   With captions encodes, there aren't many fields for each captions format. But the fields for the formats are very different from each other.

1. Study the fields and make notes. 

# Identify encode sharing opportunities
<a name="plan-encode-sharing"></a>

If you have already identified the details for all the output encodes, you can now identify opportunities for encode sharing. 

If you plan to identify details later, we recommend that you come back to this section to identify opportunities. 

Read about encode sharing and encode cloning in [Sharing encodes among outputs](feature-share-encode.md).

You will use encode sharing and encode cloning when you create the encodes in the channel, starting with [Set up the video encode](creating-a-channel-step6.md).
When you have a complete list, compare the values for the video encodes:
+ If you have two (or more) encodes with identical values, you can share the encode. When you create the channel, you create this encode once, in one output, and then reuse it in other outputs. The procedure for creating the encode provides detailed instructions for reusing.

  Keep in mind that two encodes are identical only if they are identical in all their fields, including sharing the same video source. For example, in the sample table earlier in this section, the first video encode for HLS and the video encode for RTMP share the same video source.
+ If you have two (or more) encodes with nearly identical values, you can clone one encode to create a second encode, and then change specific fields in the second encode. The procedure for creating the encode provides detailed instructions for cloning.

Carefully identify the video encodes to share by noting the outputs and output groups each belongs to.

Then identify opportunities for sharing among the audio encodes, in the same way as you did for the video encodes. Keep in mind that two encodes are identical only if they are identical in all their fields, including sharing the same audio source. 

Finally, identify opportunities for sharing among the captions encodes, in the same way. Keep in mind that two encodes are identical only if they are identical in all their fields, including sharing the same captions source. 
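The comparison described above can be sketched as grouping encodes by their full set of settings. The encode names and settings below are hypothetical; encodes whose settings (including the source selector) match exactly are share candidates, while near-matches are clone candidates.

```python
from collections import defaultdict

# Hypothetical encodes: (name, full settings including the source selector).
encodes = [
    ("VideoA", frozenset({("codec", "AVC"), ("res", "1920x1080"),
                          ("bitrate", 5_000_000), ("source", "selector1")})),
    ("VideoD", frozenset({("codec", "AVC"), ("res", "1920x1080"),
                          ("bitrate", 5_000_000), ("source", "selector1")})),
    ("VideoB", frozenset({("codec", "AVC"), ("res", "1280x720"),
                          ("bitrate", 3_000_000), ("source", "selector1")})),
]

# Group by identical settings; any group with more than one encode can
# be created once and shared. The rest are scratch or clone candidates.
groups = defaultdict(list)
for name, settings in encodes:
    groups[settings].append(name)

shareable = [names for names in groups.values() if len(names) > 1]
print(shareable)  # [['VideoA', 'VideoD']]
```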

**Example**

Following the example in the earlier steps of this channel-planning section, you might identify the opportunities shown in the last two columns of this table.



| Encode nickname |  Characteristics of the encode  | Source | Opportunity | Action | 
| --- | --- | --- | --- | --- | 
| VideoA |  AVC 1920x1080, 5 Mbps  | HEVC  |  | Create this encode from scratch. | 
| VideoB |  AVC 1280x720, 3 Mbps  | HEVC  | Clone | Clone VideoA and change the bitrate. Perhaps also other fields. | 
| VideoC | AVC 320x240, 750 Kbps | HEVC  | Clone | Clone VideoA and change the bitrate and perhaps other fields. | 
| AudioA | AAC 2.0 in English at 192000 bps | AAC 2.0 |  | Create this encode from scratch. | 
| AudioB | AAC 2.0 in French at 192000 bps | AAC 2.0  | Clone | Clone AudioA and change the audio selector (the reference to the source) to the selector for French. Perhaps also change other fields. | 
| CaptionsA |  WebVTT (object-style) converted from embedded, in English  | Embedded |  | Create this encode from scratch. | 
| CaptionsB | WebVTT (object-style) converted from embedded, in French | Embedded | Clone | Clone CaptionsA and change the captions selector (the reference to the source) to the selector for French. Perhaps also change other fields. | 
| VideoD | AVC 1920x1080, 5 Mbps  | HEVC  | Share | Share VideoA | 
| AudioC | Dolby Digital 5.1 in Spanish | Dolby Digital 5.1  |  | Create this encode from scratch. | 
| CaptionsC | RTMP CaptionInfo (converted from embedded) in Spanish | Embedded | Clone | Clone CaptionsA and change the captions selector (the reference to the source) to the selector for Spanish. Perhaps also change other fields. | 
| VideoE | AVC, 1920x1080, 5 Mbps | HEVC  | Share | Share VideoA | 
| AudioD | Dolby Digital 2.0 in Spanish  | AAC 2.0 |  | Create this encode from scratch. Although its source is the same as AudioA, its output codec is different, which means all its configuration fields are different. Therefore, there is no advantage to cloning. | 
| AudioE | Dolby Digital 2.0 in French | AAC 2.0  | Clone | Clone AudioD and change the audio selector (the reference to the source) to the selector for French. Perhaps also change other fields. Don't clone AudioB, because AudioB and AudioE have different output codecs. Therefore, there is no advantage to cloning. | 
| AudioF | Dolby Digital 2.0 in English | AAC 2.0 | Clone | Clone AudioD and change the audio selector (the reference to the source) to the selector for English. Perhaps also change other fields. Don't clone AudioB, because AudioB and AudioF have different output codecs. Therefore, there is no advantage to cloning. | 
| CaptionsD | DVB-Sub (object-style) converted from Teletext, in 6 languages.  | Teletext |  | Create this encode from scratch. | 