

# Preparing the upstream and downstream systems in a workflow
<a name="container-planning-uss-dss"></a>

As the first stage in planning the workflow, you must set up the upstream and downstream systems. 

**Important**  
This procedure describes planning the workflow starting from the output and then working back to the input. This is the most effective way to plan a workflow.

**To plan the workflow**

1. Identify the output groups that you need to produce, based on the systems that are downstream of MediaLive. See [Identify the output group types for the downstream system](identify-downstream-system.md).

1. Identify the requirements for the video and audio encodes that you will include in each output group. See [Identify the encode requirements for the output groups](identify-dss-video-audio.md).

1. Decide on the channel class. Decide if you want to create a standard channel, which supports redundancy, or a single-pipeline channel, which doesn't. See [Identify resiliency requirements](plan-redundancy.md).

1. Assess the source content to make sure it's compatible with MediaLive and with the outputs that you need to create. For example, make sure that the source content has a video codec that MediaLive supports. See [Assess the upstream system](evaluate-upstream-system.md).

   After you have performed these four steps, you know whether MediaLive can handle your transcoding request.

1. Collect identifiers for the source content. For example, ask the operator at the upstream system for the identifiers for the different audio languages that you want to extract from the content. See [Collect information about the source content](planning-content-extract.md).

1. Coordinate with the downstream system or systems to provide a destination for the output groups that MediaLive will produce. See [Coordinate with downstream systems](setting-up-downstream-system.md).

# Identify the output group types for the downstream system
<a name="identify-downstream-system"></a>

The first step in planning any AWS Elemental MediaLive workflow is to determine which types of [*output groups*](what-is-terminology.md) you need to produce, based on the requirements and capabilities of the systems that are downstream of MediaLive.

Perform this work with the downstream system before you assess the [upstream system](evaluate-upstream-system.md). Decision making in a workflow starts with the downstream system, then works back to the upstream system.

**Important**  
You should already have identified the downstream system or systems that you will send MediaLive output to in this workflow. If you haven't yet identified the downstream system, do that research before continuing with preparing your workflow; this guide can't help you identify your downstream system. When you know what your downstream systems are, return to this section.

**To identify the output groups**

1. Obtain the following information from your downstream system.
   + The required output formats. For example, HLS.
   + The application protocol for each. For example, HTTP.

1. Decide on the delivery mode for your outputs.
   + You might have an output destination that is on a server on an Amazon EC2 instance in your VPC, or an output destination in Amazon S3. If either of these situations applies, you might want to set up for delivery through your VPC. For more information, see [Delivering outputs via your VPC](delivery-out-vpc.md).
   + If you don't have either of these types of outputs, you will deliver in the regular way.

1. Make sure that MediaLive includes an *output group* that supports the output format and protocol that the downstream system requires. See [Output types supported in MediaLive](outputs-supported-containers.md).

1. If your preferred downstream system is another AWS media service, see [Choosing among the AWS media services](dss-choose-service.md) for information about choosing the service.

1. If your downstream system supports Microsoft Smooth Streaming, see [Options for handling Microsoft Smooth output](downstream-system-for-mss.md) for options.

1. If you want to send your output to other AWS Regions or to other AWS accounts before distribution, consider creating a MediaConnect output group. AWS Elemental MediaConnect is well suited to workflows that require cross-Region or cross-account distribution.

1. Decide if you want to create an Archive output group in order to produce an archive file of the content. An archive file is a supplement to streaming; it isn't itself a streaming output. Typically, you create an archive file as a permanent file version of the streaming output. 

1. Decide if you want to create a Frame capture output group in order to produce a frame capture output. A Frame capture output is a supplement to streaming; it isn't itself a streaming output. For example, you might use a Frame capture output to create thumbnails of the content.

1. Make a note of the output groups that you decide to create.

   For example, after you have followed these steps, you might have this list of output groups:
   + One HLS output group with AWS Elemental MediaPackage as the downstream system. 
   + One RTMP output group sending to the downstream system of a social media site.
   + One Archive output group as a record.

**Topics**
+ [Choosing among the AWS media services](dss-choose-service.md)
+ [Choosing between the HLS output group and MediaPackage output group](hls-choosing-hls-vs-emp.md)
+ [Options for handling Microsoft Smooth output](downstream-system-for-mss.md)

# Choosing among the AWS media services
<a name="dss-choose-service"></a>

If your preferred downstream system is another AWS media service, following are some useful tips for choosing the service to use: 
+ If you need to choose between AWS Elemental MediaPackage and AWS Elemental MediaStore for HLS outputs, follow these guidelines: 
  + Decide if you want to protect your content with a digital rights management (DRM) solution. DRM prevents unauthorized people from accessing the content. 
  + Decide if you want to insert ads in your content. 

  If you want either or both of these features, you should choose MediaPackage as the origin service because you will need to repackage the output. 

  If you do not want either of these features, you could choose MediaPackage or AWS Elemental MediaStore. AWS Elemental MediaStore is generally a simpler solution as an origin service, but it lacks the repackaging features of MediaPackage. 
+ If you have identified AWS Elemental MediaPackage as an origin service, decide if you will produce the HLS output using an HLS output group or a MediaPackage output group. For guidelines on making this choice, see the [next section](hls-choosing-hls-vs-emp.md).

# Choosing between the HLS output group and MediaPackage output group
<a name="hls-choosing-hls-vs-emp"></a>

If you want to deliver HLS output to AWS Elemental MediaPackage, you must decide if you want to create an HLS output group or a MediaPackage output group. 

## Delivering to MediaPackage v2
<a name="hls-choose-empv2"></a>

If you are delivering to a MediaPackage channel that uses MediaPackage v2, you must create an HLS output group. The MediaPackage operator can tell you if the channel uses version 2 of the API. One use case for version 2 is to implement a glass-to-glass low-latency workflow that includes both MediaLive and MediaPackage.

## Delivering to standard MediaPackage (v1)
<a name="hls-choose-emp"></a>

There are differences in the setup of each type of output group:
+ The MediaPackage output requires less setup. AWS Elemental MediaLive is already set up with most of the information that it needs to package and deliver the output to the AWS Elemental MediaPackage channel that you specify. This easier setup has benefits, but it also has drawbacks because you can't control some configuration. For information about how MediaLive sets up a MediaPackage output group, see [Result of this procedure](mediapackage-create-result.md).
+ For a MediaPackage output, the MediaLive channel and the AWS Elemental MediaPackage channel must be in the same AWS Region.
+ In a MediaPackage output, there are some restrictions on setting up ID3 metadata. For details, see [Working with ID3 metadata](id3-metadata.md). 

# Options for handling Microsoft Smooth output
<a name="downstream-system-for-mss"></a>

If you are delivering to a Microsoft Smooth Streaming server, the setup depends on whether you want to protect your content with a digital rights management (DRM) solution. DRM prevents unauthorized people from accessing the content. 
+ If you don't want to implement DRM, then create a Microsoft Smooth output group. 
+ If you do want to implement DRM, you can create an HLS or MediaPackage output group to send the output to AWS Elemental MediaPackage, then use AWS Elemental MediaPackage to add DRM. You will then set up AWS Elemental MediaPackage to deliver to the Microsoft Smooth origin server.

# Identify the encode requirements for the output groups
<a name="identify-dss-video-audio"></a>

After you have identified the output groups that you need to create, you must identify the requirements for the video and audio encodes that you will include in each output group. The downstream system controls these requirements.

Perform this work with the downstream system before you assess the [upstream system](evaluate-upstream-system.md). Decision making in a workflow starts with the downstream system, then works back to the upstream system.

**To identify the video and audio codecs in each output group**

Perform this procedure on every output group that you identified.

1. Obtain the following video information from your downstream system:
   + The video codec or codecs that they support. 
   + The maximum bitrate and maximum resolution that they can support.

1. Obtain the following audio information from your downstream system:
   + The supported audio codec or codecs.
   + The supported audio coding modes (for example, 2.0) in each codec.
   + The maximum supported bitrate for audio.
   + For an HLS or Microsoft Smooth output format, whether the downstream system requires the audio to be bundled in with the video or each audio asset to appear in its own rendition group. You will need this information when you organize the assets in the MediaLive outputs.

1. Obtain the following captions information from your downstream system.
   + The captions formats that they support.

1. Verify the video. Compare the video codecs that your downstream system requires to the video codecs that MediaLive supports for this output group. See the tables in [Supported codecs by output type](outputs-supported-codecs.md). Make sure that MediaLive supports at least one of the video codecs that the downstream system accepts. 

1. Verify the audio. Compare the audio codecs that your downstream system requires to the audio codecs that MediaLive supports for this output group. See the tables in [Supported codecs by output type](outputs-supported-codecs.md). Make sure that MediaLive supports at least one of the audio codecs that the downstream system accepts. 

1. Skip assessment of the caption formats for now. You will assess those requirements in [a later section](assess-uss-captions.md).

1. Make a note of the video codecs and audio codecs that you can produce for each output group.

1. Decide whether you want to implement a trick-play track. For more information, see [Implementing a trick-play track](trick-play-solutions.md).

**Result of this step**

After you have performed this procedure, you will know what output groups you will create, and you will know which video and audio codecs those output groups can support. Therefore, you should have output information that looks like this example.


**Example**  

|  Output group   |  Downstream system  |  Video codecs supported by downstream system  | Audio codecs supported by downstream system | 
| --- | --- | --- | --- | 
|  HLS  |  MediaPackage  |  AVC  | AAC 2.0, Dolby Digital Plus | 
| RTMP | social media site | AVC | AAC 2.0 | 
| Archive | Amazon S3 | The downstream system doesn't dictate the codec—you choose the codec that you want. | The downstream system doesn't dictate the codec—you choose the codec that you want. | 

# Identify resiliency requirements
<a name="plan-redundancy"></a>

Resiliency is the ability of the channel to continue to work when problems occur. MediaLive includes two resiliency features. Decide now which of these features you want to implement, because they affect how many sources you need for your content, which in turn requires discussion with your upstream system.

## Pipeline redundancy
<a name="decide-resil-pipeline"></a>

You can usually set up a channel with two pipelines, to provide resiliency within the channel processing pipeline.

Pipeline redundancy is a feature that applies to the entire channel and to all the inputs attached to the channel. Early on in your planning of the channel, you must decide how you want to set up the pipelines. 

You set up for pipeline redundancy by setting up the channel as a *standard channel* so that it has two encoding pipelines. Both pipelines ingest the source content and produce output. If the current pipeline fails, the downstream system can detect that it is no longer receiving content and can switch to the other output, so there is no disruption to the downstream system. MediaLive restarts the failed pipeline within a few minutes.

For more information about pipeline redundancy, see [Implementing pipeline redundancy](plan-redundancy-mode.md).

## Automatic input failover
<a name="decide-resil-aif"></a>

With some input types, you can set up two inputs as an automatic input failover *pair*, in order to provide resiliency for one input in the channel.

Automatic input failover is a feature that applies to individual inputs. You don't have to make a decision about implementing automatic input failover when planning the channel. You can implement it later on, when attaching a new input, or when you want to upgrade an existing input so that it implements automatic input failover. 

To set up for automatic input failover, you set up two inputs (that have the exact same source content) as an *input failover pair*. Setting up this way provides resiliency in case of a failure in the upstream system, or between the upstream system and the channel. 

In the input pair, one of the inputs is the *active* input and one is on *standby*. MediaLive ingests both inputs, in order to always be ready to switch, but it usually discards the standby input immediately. If the active input fails, MediaLive immediately fails over and starts processing the standby input instead of discarding it.

You can implement automatic input failover in a channel that is set up for pipeline redundancy (a standard channel) or one that has no pipeline redundancy (a single-pipeline channel). 

For more information about automatic input failover, see [Implementing automatic input failover](automatic-input-failover.md).

## Comparison of the two features
<a name="resil-compare-features"></a>

Following is a comparison of pipeline redundancy and automatic input failover.
+ There is a difference in the failure that each feature deals with:

  Pipeline redundancy provides resiliency in case of a failure in the MediaLive encoder pipeline.

  Automatic input failover provides resiliency in case of a failure ahead of MediaLive, either in the upstream system or in the network connection between the upstream system and the MediaLive input.
+ Both features require two instances of the content source, so in both cases your upstream system must be able to provide two instances. 

  With pipeline redundancy, the two sources can originate from the same encoder. 

  With automatic input failover, the sources must originate from different encoders. If they originate from the same encoder, a failure in that encoder takes down both sources, and failover can't occur.
+ Pipeline redundancy applies to the entire channel. Therefore you should decide whether you want to implement it when you plan the channel. Automatic input failover applies only to specific input types. Therefore you could, for example, decide to implement automatic input failover only when you attach your most important input.
+ Pipeline redundancy requires that the downstream system be able to handle two instances of the output and be able to switch from one (when it fails) to the other. MediaPackage, for example, can handle two instances.

  If your downstream system doesn't have this logic built in, then you can't implement pipeline redundancy.

# Assess the upstream system
<a name="evaluate-upstream-system"></a>

As part of the planning of the MediaLive workflow, you must assess the upstream system that is the source of the content, to ensure that it is compatible with MediaLive. Then you must assess the source content to ensure that it contains formats that MediaLive can ingest and that MediaLive can include in the outputs you want. 

You obtain the *source content* from a *content provider*. The source content is provided to you from an *upstream system* that the content provider controls. Typically, you have already identified the content provider. For more information about source content and upstream systems, see [How MediaLive works](how-medialive-works-channels.md).

**To assess the upstream system**

1. Speak to the content provider to obtain information about the upstream system. You use this information to assess the ability of MediaLive to connect to the upstream system, and to assess the ability of MediaLive to use the source content from that upstream system.

   For details about the information to obtain and assess, see the following sections:
   + [Assess source formats and packaging](uss-obtain-info.md)
   + [Assess video content](assess-uss-source.md)
   + [Assess audio content](assess-uss-audio.md)
   + [Assess captions](assess-uss-captions.md)

1. Make a note of the MediaLive input type that you identify for the source content.

1. Make a note of the following three characteristics of the source stream. You will need this information [when you set up the channel](input-specification.md):
   + The video codec
   + The resolution of the video (SD, HD, or UHD)
   + The maximum input bitrate 

**Result of this step**

At the end of this step, you will be confident that MediaLive can ingest the content. In addition you will have identified the following:
+ The type of MediaLive input you will create to ingest the source content.
+ The information that you need to extract the video, audio, and captions from the source (from the MediaLive input). For example:

[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/medialive/latest/ug/evaluate-upstream-system.html)

# Assess source formats and packaging
<a name="uss-obtain-info"></a>

Consult the following table for information about how to assess the source formats and packaging. Read across each row.



| Information to obtain | Verify the following | 
| --- | --- | 
| Number of sources that the content provider can provide. | If you plan to implement a [resiliency feature](plan-redundancy.md), make sure that your content provider can deliver the required inputs: [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/medialive/latest/ug/uss-obtain-info.html) | 
| Delivery formats and protocols. The type of MediaLive input that applies to the format that you identify. | Find out what format and protocol the upstream system supports for delivery. Make sure that this format is listed in the table in [Input types, protocols, and upstream systems](inputs-supported-formats.md). [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/medialive/latest/ug/uss-obtain-info.html) Note that you don't need to verify this information for content delivered over CDI or content delivered from an AWS Elemental Link. MediaLive can always handle these input types. | 
| Whether the upstream system is using the latest SDK | Make sure that the content provider is using the latest version of the [AWS CDI SDK](https://aws.amazon.com/media-services/resources/cdi/) on their upstream CDI source device. | 
| Whether the source content is a live stream or a VOD asset | Find out if the source content is a live stream or a VOD asset. Make sure that MediaLive supports the delivery for the format that you identified. See the table in [Support for live and file sources](inputs-live-vs-file.md). | 
| Whether the content is encrypted | MediaLive can ingest encrypted content only from HLS sources. If the source content is HLS and it is encrypted, make sure that it is encrypted in a format that MediaLive supports. See [Handling encrypted source content in an HLS source](planning-hls-input-encrypted.md). If MediaLive doesn't support the available encryption format, find out if you can obtain the content in unencrypted form. | 
| If the source content is RTP, whether it includes FEC | We recommend that the source content include FEC, because FEC reduces the likelihood of visual disruptions in the output. | 

# Handling encrypted source content in an HLS source
<a name="planning-hls-input-encrypted"></a>

MediaLive can ingest an HLS source that is encrypted according to the HTTP Live Streaming specification.

**Supported encryption format**

MediaLive supports the following format for encrypted HLS sources:
+ The source content is encrypted with AES-128. MediaLive doesn't support SAMPLE-AES. 
+ The source content is encrypted using either static or rotating keys.
+ The manifest includes the `#EXT-X-KEY` tag with these attributes:
  + The `METHOD` attribute specifies AES-128.
  + The URI specifies the license server for the encryption key.
  + The IV is blank or specifies the initialization vector (IV) to use. If the IV is blank, MediaLive uses the value in the `#EXT-X-MEDIA-SEQUENCE` tag as the IV.
+ If both the upstream system and the license server require authentication credentials (user name and password), make sure that the same credentials are used on both servers. MediaLive does not support having different credentials for these two servers.
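
For reference, a media playlist that meets these requirements might contain a key tag like the following. The key server URI, key file name, segment names, and IV value here are hypothetical examples, not values that MediaLive requires:

```
#EXTM3U
#EXT-X-VERSION:3
#EXT-X-TARGETDURATION:6
#EXT-X-MEDIA-SEQUENCE:7794
#EXT-X-KEY:METHOD=AES-128,URI="https://keys.example.com/key27.bin",IV=0x9c7db8778570d05c3177c349fd9236aa
#EXTINF:6.0,
segment7794.ts
```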

**How decryption works**

The content owner sets up the main manifest to include the `#EXT-X-KEY` tag with the method (AES-128), the URL to the license server, and the initialization vector (IV). The content owner places the encryption keys on the license server. When the MediaLive channel that uses this source starts, MediaLive obtains the main manifest and reads the `#EXT-X-KEY` tag for the URL of the license server. 

MediaLive connects to the license server and obtains the encryption key. MediaLive starts pulling the content from the upstream system, and decrypts the content using the encryption key and the IV. 
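
The blank-IV rule described above can be sketched in a few lines. This is an illustrative helper, not MediaLive code; per the HLS specification, a blank IV means the segment's media sequence number, encoded as a big-endian 128-bit value:

```python
def derive_iv(media_sequence_number: int) -> bytes:
    """Return the 16-byte AES-128 IV for a segment whose #EXT-X-KEY tag
    omits the IV attribute: the segment's media sequence number encoded
    as a big-endian 128-bit value."""
    return media_sequence_number.to_bytes(16, "big")

# The first segment after #EXT-X-MEDIA-SEQUENCE:7794 has sequence number 7794.
print(derive_iv(7794).hex())  # 00000000000000000000000000001e72
```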

# Assess video content
<a name="assess-uss-source"></a>

Consult the following table for information about how to assess video source. Read across each row.

**Note**  
You don't need to perform any assessment of the video being delivered over CDI or from an AWS Elemental Link device. These sources are always acceptable to MediaLive.



| Information to obtain | Verify the following | 
| --- | --- | 
| The available video codecs or formats. | Make sure that at least one of the video codecs is included in the list of video codecs for the package format. See [Supported codecs by input type](inputs-supported-codecs-by-input-type.md). If the content is available in more than one supported codec, decide which single video codec you want to use. You can extract only one video asset from the source content. | 
| The maximum expected bitrate. | Make sure that the bandwidth between the upstream system and MediaLive is sufficient to handle the anticipated maximum bitrate of the source content. If you are setting up standard channels (to implement [pipeline redundancy](plan-redundancy.md)), make sure that the bandwidth is double the anticipated maximum bitrate, because there are two pipelines. | 
| Whether the video characteristics change in the middle of the stream.  | For best results, verify that the video characteristics of the video source don't change in the middle of the stream. For example, the codec should not change. The frame rate should not change. | 

# Assess audio content
<a name="assess-uss-audio"></a>

Consult the following table for information about how to assess the audio source. Read across each row.

**Note**  
You don't need to perform any assessment of the audio being delivered over CDI or from an AWS Elemental Link device. These sources are always acceptable to MediaLive.



| Information to obtain | Verify the following | 
| --- | --- | 
| The available audio codecs or formats. | Make sure that at least one of the audio codecs is included in the list of audio codecs in [Supported codecs by input type](inputs-supported-codecs-by-input-type.md).  | 
| The available languages for each codec. For example, English, French. | Identify the languages that you would like to offer. Determine which of these languages the content provider can provide.  | 
| The available coding modes (for example, 2.0 and 5.1) for each codec. |  Identify the audio coding modes that you prefer for each audio language. Determine which of these coding modes the content provider can provide. For more information, see the [section after this table](#coding).   | 
| Whether the audio characteristics change in the middle of the stream.  |  For best results, verify that the audio characteristics of the source content don't change in the middle of the stream. For example, the codec of the source should not change. The coding mode should not change. A language should not disappear.  | 
| If the source content is HLS, whether the audio assets are in an audio rendition group or multiplexed with video.  |  MediaLive can ingest audio assets that are in a separate rendition group or multiplexed into a single stream with the video.  | 
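
To see what the rendition-group distinction looks like in practice, here is a sketch of a multivariant HLS manifest whose audio is carried in a separate rendition group; the group ID, URIs, and bandwidth are hypothetical. A muxed source instead omits the `#EXT-X-MEDIA` lines and the `AUDIO` attribute, and carries the audio in the same transport stream as the video:

```
#EXTM3U
#EXT-X-MEDIA:TYPE=AUDIO,GROUP-ID="aac",LANGUAGE="eng",NAME="English",DEFAULT=YES,URI="audio_en.m3u8"
#EXT-X-MEDIA:TYPE=AUDIO,GROUP-ID="aac",LANGUAGE="fra",NAME="French",DEFAULT=NO,URI="audio_fr.m3u8"
#EXT-X-STREAM-INF:BANDWIDTH=5000000,CODECS="avc1.640029,mp4a.40.2",AUDIO="aac"
video_high.m3u8
```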

**To decide on a coding mode**  
If multiple coding modes are available for the same language, decide which mode you want to use. Follow these guidelines:
+ You can extract some languages in one codec and coding mode, and other languages in another codec and coding mode. For example, you might want one or two languages available in 5.1 coding mode, and want other languages in 2.0 coding mode. 
+ You can extract the same language more than once. For example, you might want one language in both coding mode 5.1 and coding mode 2.0.
+ When deciding which codec and coding mode to extract for a given language, consider the coding mode you want for that language in the output. For each language, it is always easiest if the coding mode of the source content matches the coding mode of the output, because then you don't have to remix the audio in order to convert the coding mode. MediaLive supports remix, but remixing is an advanced feature that requires a good understanding of audio.

For example, in the output, you might want one language to be in coding mode 5.1 and other languages to be available in coding mode 2.0.

Therefore you might choose to extract the following:
+ Spanish in Dolby Digital 5.1
+ French and English in AAC 2.0.

# Assess captions
<a name="assess-uss-captions"></a>

If you plan to include captions in an output group, you must determine if MediaLive can use the captions format in the source to produce the captions format that you want in the output. 

Obtain the following information about the captions source.


[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/medialive/latest/ug/assess-uss-captions.html)

**To assess the captions requirements**

Follow these steps for each [output group that you identified](identify-downstream-system.md) for your workflow.

1. Go to [Captions supported in MediaLive](supported-captions.md) and find the section for the output group. For example, find [Captions formats supported in HLS or MediaPackage outputs](supported-formats-hls-output.md). In the table in that section, read down the first column to find the format (container) that the content provider is providing. 

1. Read across to the *Supported caption input* column to find the caption formats that MediaLive supports in that source format.

1. Then read across to the *Supported output captions* column to find the caption formats that MediaLive can convert the source format to.

   You end up with a statement such as: "If you want to produce an HLS output and your source content is RTMP, you can convert embedded captions to burn-in, embedded, or WebVTT".

1. Verify that the source content from the content provider matches one of the formats in the *Supported caption input* column of the table. For example, verify that the source content contains embedded captions.

1. Find the list of captions formats that the downstream system supports. You obtained this list when you [identified the encode requirements for the output groups that you identified](identify-dss-video-audio.md). Verify that at least one of these output formats appears in the *Supported output captions* column of the table.

   If there is no match in the source content, or no match in the output, then you can't include captions in the output.

For example, assume that you need to produce an HLS output group. Assume that your content provider can give you content in RTP format with embedded captions. Assume that the downstream system requires that for HLS output, the output must include WebVTT captions.

Following the steps above, you read the table for HLS outputs. In the container column of the table, you find the row for RTP format. You read across to the source column and identify that embedded captions are a supported source format. You then read across to the output column and find that embedded captions can be converted to burn-in, embedded, or WebVTT captions. WebVTT captions is the format that the downstream system requires. Therefore, you conclude that you can include captions in the HLS output.

# Collect information about the source content
<a name="planning-content-extract"></a>

After you have assessed the source content and have identified suitable video, audio, and captions assets in that content, you must obtain information about those assets. The information you need is different for each type of source. 

You don't need this information to [create the input](medialive-inputs.md) in MediaLive. But you will need this information when you [attach the input](creating-a-channel-step2.md) to the channel in MediaLive.

**Result of this step**  
After you have performed the procedures in this step, you should have source content information that looks like this example.


**Example**  
[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/medialive/latest/ug/planning-content-extract.html)

**Topics**
+ [Identifying content in a CDI source](extract-contents-cdi.md)
+ [Identifying content in an AWS Elemental Link source](extract-contents-link.md)
+ [Identifying content in an HLS source](extract-contents-hls.md)
+ [Identifying content in a MediaConnect source](extract-content-emx.md)
+ [Identifying content in an MP4 source](extract-contents-mp4.md)
+ [Identifying content in an RTMP source](extract-contents-rtmp.md)
+ [Identifying content in an RTP source](extract-contents-rtp.md)
+ [Identifying content in a SMPTE 2110 source](extract-contents-s2110.md)
+ [Identifying content in an SRT source](extract-contents-srt.md)

# Identifying content in a CDI source
<a name="extract-contents-cdi"></a>

The content in a CDI source always consists of uncompressed video, uncompressed audio, and captions. 

Obtain identifying information from the content provider.


****  
[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/medialive/latest/ug/extract-contents-cdi.html)

# Identifying content in an AWS Elemental Link source
<a name="extract-contents-link"></a>

The content in an AWS Elemental Link source is always a transport stream (TS) that contains one video asset, one audio pair, and optional captions. 

Obtain identifying information from the content provider.


****  
[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/medialive/latest/ug/extract-contents-link.html)

Also obtain the following information about the content:
+ The maximum bitrate. You will have the option to throttle this bitrate when you set up the device in MediaLive. For more information, see [Setting up AWS Elemental Link](setup-devices.md). 
+ Whether the content includes an embedded timecode. If it does, you can choose to use that timecode. For more information, see [Working with timecodes and timestamps](timecode.md). 
+ Whether the content includes ad avail messages (SCTE-104 messages that MediaLive will automatically convert to SCTE-35 messages). For more information about ad avail messages, see [Processing SCTE 35 messages](scte-35-message-processing.md).

# Identifying content in an HLS source
<a name="extract-contents-hls"></a>

The content in an HLS source is always a transport stream (TS) that contains only one video rendition (program). 

Obtain identifying information from the content provider.


****  

|  Asset  |  Details  | Information to obtain | 
| --- | --- | --- | 
| Video | You don't need identifying information. MediaLive always extracts the single video asset. |  | 
| Audio | The source might include multiple audio PIDs. | Obtain the PIDs or three-character language codes of the languages that you want. We recommend that you obtain the PIDs for the audio assets. They are a more reliable way of identifying an audio asset.  | 
| Captions | The captions are embedded in the video. | Obtain the languages and their channel numbers. For example, "channel 1 is French".  | 
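If you can fetch the HLS multivariant playlist yourself, you can read the audio renditions directly from the `#EXT-X-MEDIA` tags instead of asking the operator. A minimal sketch, using a made-up manifest as input:

```python
import re

def list_audio_renditions(manifest: str):
    """Return (GROUP-ID, LANGUAGE, NAME) for each audio rendition in an HLS manifest."""
    renditions = []
    for line in manifest.splitlines():
        if line.startswith("#EXT-X-MEDIA:") and "TYPE=AUDIO" in line:
            # Collect the quoted KEY="value" attribute pairs on the tag line.
            attrs = dict(re.findall(r'([A-Z-]+)="([^"]*)"', line))
            renditions.append((attrs.get("GROUP-ID"), attrs.get("LANGUAGE"), attrs.get("NAME")))
    return renditions

# Hand-written example manifest; not from a real source.
sample = """#EXTM3U
#EXT-X-MEDIA:TYPE=AUDIO,GROUP-ID="aac",LANGUAGE="eng",NAME="English",DEFAULT=YES,URI="eng.m3u8"
#EXT-X-MEDIA:TYPE=AUDIO,GROUP-ID="aac",LANGUAGE="fra",NAME="French",URI="fra.m3u8"
#EXT-X-STREAM-INF:BANDWIDTH=5000000,AUDIO="aac"
video.m3u8"""

print(list_audio_renditions(sample))  # [('aac', 'eng', 'English'), ('aac', 'fra', 'French')]
```

Note that the `LANGUAGE` attribute gives you the three-character language codes mentioned in the table; the audio PIDs themselves are inside the TS segments and still need to come from the operator.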

# Identifying content in a MediaConnect source
<a name="extract-content-emx"></a>

The content in an AWS Elemental MediaConnect source is always a transport stream (TS). The TS is made up of one program (SPTS) or multiple programs (MPTS). Each program contains a combination of video, audio, and optional captions.

Obtain identifying information from the content provider.


****  
[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/medialive/latest/ug/extract-content-emx.html)

# Identifying content in an MP4 source
<a name="extract-contents-mp4"></a>

The content in an MP4 source always consists of one video track, one or more audio tracks, and optional captions. 

Obtain identifying information from the content provider.


****  

|  Asset  |  Details  | Information to obtain | 
| --- | --- | --- | 
| Video | You don't need identifying information. MediaLive always extracts the single video asset. | None | 
| Audio | The source might include multiple audio tracks, typically, one for each language.  | Obtain the track numbers or three-character language codes of the languages that you want. | 
| Captions | The captions might be embedded in the video track or in an ancillary track. | Obtain the languages and their channel numbers. For example, "channel 1 is French".  | 
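If you have the file itself, a common way to find the track numbers and language codes is to run `ffprobe` with JSON output and read the audio streams. The JSON below is a hand-written example shaped like that output, not generated from a real file:

```python
import json

# Hand-written sample shaped like `ffprobe -show_streams -of json input.mp4` output.
ffprobe_output = json.loads("""
{
  "streams": [
    {"index": 0, "codec_type": "video", "codec_name": "h264"},
    {"index": 1, "codec_type": "audio", "codec_name": "aac", "tags": {"language": "eng"}},
    {"index": 2, "codec_type": "audio", "codec_name": "aac", "tags": {"language": "spa"}}
  ]
}
""")

# Pair each audio stream's index with its language tag ("und" if untagged).
audio_tracks = [
    (s["index"], s.get("tags", {}).get("language", "und"))
    for s in ffprobe_output["streams"]
    if s["codec_type"] == "audio"
]
print(audio_tracks)  # [(1, 'eng'), (2, 'spa')]
```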

# Identifying content in an RTMP source
<a name="extract-contents-rtmp"></a>

This procedure applies to both RTMP push and pull inputs from the internet, and to RTMP inputs from Amazon Virtual Private Cloud. The content in an RTMP input always consists of one video, one audio, and optional captions. 

Obtain identifying information from the content provider.


****  

|  Asset  |  Details  | Information to obtain | 
| --- | --- | --- | 
| Video | You don't need identifying information. MediaLive always extracts the single video asset. | None | 
| Audio | You don't need identifying information. MediaLive always extracts the single audio asset. | None | 
| Captions | The captions might be embedded in the video track or in an ancillary track. | Obtain the languages and their channel numbers. For example, "channel 1 is French".  | 

# Identifying content in an RTP source
<a name="extract-contents-rtp"></a>

This procedure applies to both RTP inputs from the internet and inputs from Amazon Virtual Private Cloud. The content in an RTP input is always a transport stream (TS). The TS is made up of one program (SPTS) or multiple programs (MPTS). Each program contains a combination of video, audio, and optional captions. 

Obtain identifying information from the content provider.


****  
[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/medialive/latest/ug/extract-contents-rtp.html)
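For an MPTS, the key facts to collect are the program number you want and the PIDs inside that program. A hypothetical sketch of keeping those notes (the program numbers and PIDs are made up, and this is not a PAT/PMT parser):

```python
# Hypothetical notes for an MPTS source: program numbers mapped to their PIDs.
mpts_programs = {
    1: {"video_pid": 101, "audio_pids": {"eng": 110, "fra": 111}},
    2: {"video_pid": 201, "audio_pids": {"eng": 210}},
}

def pids_for(program: int, language: str):
    """Look up the video PID and one audio PID for the chosen program."""
    prog = mpts_programs[program]
    return prog["video_pid"], prog["audio_pids"][language]

print(pids_for(1, "fra"))  # (101, 111)
```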

# Identifying content in a SMPTE 2110 source
<a name="extract-contents-s2110"></a>

The content in a SMPTE 2110 source is always a set of streams consisting of one video asset, zero or more audio assets, and zero or more captions (ancillary data) assets. Each asset is in its own stream. 

Obtain identifying information from the content provider.


****  
[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/medialive/latest/ug/extract-contents-s2110.html)

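SMPTE 2110 streams are commonly described by SDP files, so the per-stream identifying information often reaches you as SDP text. A minimal sketch that lists the media streams in a hand-written SDP example (the addresses and ports are placeholders):

```python
def list_streams(sdp: str):
    """Return (media_type, port) for each m= line in an SDP description."""
    streams = []
    for line in sdp.splitlines():
        if line.startswith("m="):
            parts = line[2:].split()  # e.g. "video 5004 RTP/AVP 96"
            streams.append((parts[0], int(parts[1])))
    return streams

# Hand-written SDP fragment in the shape used for 2110 streams; values are placeholders.
sample_sdp = """v=0
o=- 0 0 IN IP4 203.0.113.10
s=Example 2110 source
m=video 5004 RTP/AVP 96
a=rtpmap:96 raw/90000
m=audio 5006 RTP/AVP 97
a=rtpmap:97 L24/48000/2"""

print(list_streams(sample_sdp))  # [('video', 5004), ('audio', 5006)]
```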
# Identifying content in an SRT source
<a name="extract-contents-srt"></a>

The content in an SRT input is always a transport stream (TS). The TS is made up of one program (SPTS) or multiple programs (MPTS). Each program contains a combination of video, audio, and optional captions. 

Obtain identifying information from the content provider.


****  
[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/medialive/latest/ug/extract-contents-srt.html)

# Coordinate with downstream systems
<a name="setting-up-downstream-system"></a>

As the final step in preparing the downstream and upstream systems in your workflow, you must speak to the operator of the downstream system and coordinate information.

The *output* from MediaLive is considered *input* to the downstream system.

The setup is different for each type of output group and downstream system. For more information, see [Setup: Creating output groups and outputs](medialive-outputs.md), and go to the section for the type of output group that you are creating. Read the information about coordinating with the downstream system. 
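For example, for a standard (two-pipeline) channel, the downstream system's operator typically gives you one destination per pipeline. A hypothetical sketch of the notes you might record (the URLs are placeholders, not real endpoints):

```python
# Hypothetical destination notes for a standard channel: one URL per pipeline.
output_group_destinations = {
    "hls": ["https://example.com/live/primary", "https://example.com/live/backup"],
}

# A standard channel needs two destinations per output group for redundancy.
assert all(len(urls) == 2 for urls in output_group_destinations.values())
print({name: len(urls) for name, urls in output_group_destinations.items()})  # {'hls': 2}
```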