

# Step 4: Assess the upstream system

As part of planning the MediaLive workflow, you must assess the upstream system that is the source of the content, to make sure that it is compatible with MediaLive. You must then assess the source content, to make sure that it contains formats that MediaLive can ingest and that MediaLive can include in the outputs that you want.

You obtain the *source content* from a *content provider*. The source content is provided to you from an *upstream system* that the content provider controls. Typically, you have already identified the content provider. For more information about source content and upstream systems, see [How MediaLive works](how-medialive-works-channels.md).

**To assess the upstream system**

1. Speak to the content provider to obtain information about the upstream system. You use this information to assess the ability of MediaLive to connect to the upstream system, and to assess the ability of MediaLive to use the source content from that upstream system.

   For details about the information to obtain and assess, see the following sections:
   + [Assess source formats and packaging](uss-obtain-info.md)
   + [Assess video content](assess-uss-source.md)
   + [Assess audio content](assess-uss-audio.md)
   + [Assess captions](assess-uss-captions.md)

1. Make a note of the MediaLive input type that you identify for the source content.

1. Make a note of the following three characteristics of the source stream. You will need this information [when you set up the channel](input-specification.md):
   + The video codec
   + The resolution of the video—SD, HD, or UHD
   + The maximum input bitrate 
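As a sketch of where these three characteristics are used later, the MediaLive CreateChannel API takes them as the channel's input specification. The field names and enum values below come from the MediaLive API; the specific choices (AVC, HD, 20 Mb/s) are illustrative for one possible HD source, not a recommendation.

```python
# Record the three source-stream characteristics in the shape that the
# MediaLive CreateChannel API expects for its InputSpecification field.
# The enum values (Codec, Resolution, MaximumBitrate) are MediaLive API
# values; the particular selections here are illustrative.

input_specification = {
    "Codec": "AVC",                   # video codec of the source: MPEG2, AVC, or HEVC
    "Resolution": "HD",               # SD, HD, or UHD
    "MaximumBitrate": "MAX_20_MBPS",  # MAX_10_MBPS, MAX_20_MBPS, or MAX_50_MBPS
}

# Later, this dict can be passed as InputSpecification=input_specification
# in a boto3 medialive create_channel call.
print(input_specification)
```

Choosing the smallest `MaximumBitrate` tier that covers the source's actual maximum keeps the channel cost down, which is why noting the true maximum input bitrate in this step matters.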

**Result of this step**

At the end of this step, you will be confident that MediaLive can ingest the content. In addition, you will have identified the following:
+ The type of MediaLive input you will create to ingest the source content.
+ The information that you need to extract the video, audio, and captions from the source (from the MediaLive input). For example:

[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/medialive/latest/ug/evaluate-upstream-system.html)

# Assess source formats and packaging


Consult the following table for information about how to assess the source formats and packaging. Read across each row.



| Information to obtain | Verify the following | 
| --- | --- | 
| Number of sources that the content provider can provide. | If you plan to implement a [resiliency feature](plan-redundancy.md), make sure that your content provider can deliver the required inputs: [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/medialive/latest/ug/uss-obtain-info.html) | 
| Delivery formats and protocols. The type of MediaLive input that applies to the format that you identify. | Find out what format and protocol the upstream system supports for delivery. Make sure that this format is listed in the table in [Input types, protocols, and upstream systems](inputs-supported-formats.md). [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/medialive/latest/ug/uss-obtain-info.html) Note that you don't need to verify this information for content delivered over CDI or content delivered from an AWS Elemental Link device. MediaLive can always handle these input types. | 
| Whether the upstream system is using the latest SDK | Make sure that the content provider is using the latest version of the [AWS CDI SDK](https://aws.amazon.com/media-services/resources/cdi/) on their upstream CDI source device. | 
| Whether the source content is a stream or VOD asset | Find out if the source content is a live stream or a VOD asset. Make sure that MediaLive supports the delivery for the format that you identified. See the table in [Support for live and file sources](inputs-live-vs-file.md).  | 
| Whether the content is encrypted | MediaLive can ingest encrypted content only in an HLS source. If the source content is HLS and is encrypted, make sure that it is encrypted in a format that MediaLive supports. See [Handling encrypted source content in an HLS source](planning-hls-input-encrypted.md). If MediaLive doesn't support the available encryption format, find out if you can obtain the content in unencrypted form. | 
| Only if the source content is RTP, whether it includes FEC. |  We recommend that the source content include FEC, because forward error correction makes visual disruptions in the output less likely.  | 
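The checks in the table above can be captured in a simple pre-flight helper. The table of supported (format, delivery) pairs below is a partial, hypothetical sample for illustration only; the authoritative list is the table in [Input types, protocols, and upstream systems](inputs-supported-formats.md).

```python
# Illustrative pre-flight check for an upstream source. SUPPORTED_DELIVERY
# is a hand-maintained, partial sample of (format, direction) pairs --
# consult the MediaLive documentation for the real, complete list.

SUPPORTED_DELIVERY = {
    ("HLS", "pull"),
    ("RTP", "push"),
    ("RTMP", "push"),
    ("MP4", "pull"),
}

def can_ingest(fmt: str, direction: str) -> bool:
    """Return True if the (format, direction) pair is in the sample table.

    CDI and Link sources always pass: as the table above notes, MediaLive
    can always handle those input types.
    """
    if fmt in ("CDI", "LINK"):
        return True
    return (fmt, direction) in SUPPORTED_DELIVERY
```

A helper like this is only as good as the table behind it, so treat it as a way to record your assessment, not as a substitute for checking the documentation.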

# Handling encrypted source content in an HLS source

MediaLive can ingest an HLS source that is encrypted according to the HTTP Live Streaming specification.

**Supported encryption format**

MediaLive supports the following format for encrypted HLS sources:
+ The source content is encrypted with AES-128. MediaLive doesn't support SAMPLE-AES.
+ The source content is encrypted using either static or rotating keys.
+ The manifest includes the `#EXT-X-KEY` tag with these attributes:
  + The `METHOD` attribute specifies AES-128.
  + The `URI` attribute specifies the license server for the encryption key.
  + The `IV` attribute is blank or specifies the initialization vector to use. If the attribute is blank, MediaLive uses the value in the `#EXT-X-MEDIA-SEQUENCE` tag as the IV.
+ If both the upstream system and the license server require authentication credentials (user name and password), make sure that the same credentials are used on both servers. MediaLive does not support having different credentials for these two servers.

**How decryption works**

The content owner sets up the main manifest to include the `#EXT-X-KEY` tag with the method (AES-128), the URL of the license server, and the initialization vector (IV). The content owner places the encryption keys on the license server. When the MediaLive channel that uses this source starts, MediaLive obtains the main manifest and reads the `#EXT-X-KEY` tag for the URL of the license server.

MediaLive connects to the license server and obtains the encryption key. MediaLive starts pulling the content from the upstream system, and decrypts the content using the encryption key and the IV. 
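The manifest handling described above can be sketched in a few lines. This is not MediaLive's implementation; it is a minimal illustration of reading an `#EXT-X-KEY` tag and falling back to the media sequence number as the 128-bit big-endian IV when no `IV` attribute is present (the behavior the section describes). The key-server URL in the sample line is hypothetical.

```python
import re

# Parse an #EXT-X-KEY tag and work out the effective IV, mirroring the
# decryption flow described above. Illustrative only; the URL below is
# a made-up example.

def parse_ext_x_key(line: str) -> dict:
    """Parse the attribute list of an #EXT-X-KEY tag into a dict."""
    attrs = {}
    body = line.split(":", 1)[1]  # drop the "#EXT-X-KEY" prefix
    for m in re.finditer(r'([A-Z-]+)=("[^"]*"|[^,]*)', body):
        attrs[m.group(1)] = m.group(2).strip('"')
    return attrs

def effective_iv(attrs: dict, media_sequence: int) -> bytes:
    """Return the explicit IV attribute if present; otherwise the media
    sequence number as a big-endian 128-bit value."""
    if "IV" in attrs:
        return bytes.fromhex(attrs["IV"].removeprefix("0x"))
    return media_sequence.to_bytes(16, "big")

key_line = '#EXT-X-KEY:METHOD=AES-128,URI="https://keys.example.com/key1",IV=0x0123456789abcdef0123456789abcdef'
attrs = parse_ext_x_key(key_line)
```

With the method, key (fetched from the `URI`), and IV in hand, a player or ingest service can decrypt each media segment as it arrives.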

# Assess video content


Consult the following table for information about how to assess the video source. Read across each row.

**Note**  
You don't need to perform any assessment of the video being delivered over CDI or from an AWS Elemental Link device. These sources are always acceptable to MediaLive.



| Information to obtain | Verify the following | 
| --- | --- | 
| The available video codecs or formats. | Make sure that at least one of the video codecs is included in the list of video codecs for the package format. See [Supported codecs by input type](inputs-supported-codecs-by-input-type.md). If the content is available in more than one supported codec, decide which single video codec you want to use. You can extract only one video asset from the source content. | 
| The maximum expected bitrate. | Make sure that the bandwidth between the upstream system and MediaLive is sufficient to handle the anticipated maximum bitrate of the source content. If you are setting up standard channels (to implement [pipeline redundancy](plan-redundancy.md)), make sure that the bandwidth is double the anticipated maximum bitrate because there are two pipelines. | 
| Whether the video characteristics change in the middle of the stream.  | For best results, verify that the video characteristics of the video source don't change in the middle of the stream. For example, the codec should not change. The frame rate should not change. | 
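The bandwidth arithmetic from the table above is simple but easy to forget, so here it is as a sketch. The function name and the 20 Mb/s figure are illustrative.

```python
# For a standard channel (two pipelines), the link between the upstream
# system and MediaLive must carry the source twice. Numbers illustrative.

def required_bandwidth_bps(max_source_bitrate_bps: int, standard_channel: bool) -> int:
    """Bandwidth needed between the upstream system and MediaLive."""
    pipelines = 2 if standard_channel else 1
    return max_source_bitrate_bps * pipelines

# A 20 Mb/s source feeding a standard channel needs 40 Mb/s of bandwidth.
print(required_bandwidth_bps(20_000_000, standard_channel=True))
```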

# Assess audio content


Consult the following table for information about how to assess the audio source. Read across each row.

**Note**  
You don't need to perform any assessment of the audio being delivered over CDI or from an AWS Elemental Link device. These sources are always acceptable to MediaLive.



| Information to obtain | Verify the following | 
| --- | --- | 
| The available audio codecs or formats. | Make sure that at least one of the audio codecs is included in the list of audio codecs in [Supported codecs by input type](inputs-supported-codecs-by-input-type.md).  | 
| The available languages for each codec. For example, English, French. | Identify the languages that you would like to offer. Determine which of these languages the content provider can provide.  | 
| The available coding modes (for example, 2.0 and 5.1) for each codec. |  Identify the audio coding modes that you prefer for each audio language. Determine which of these coding modes the content provider can provide. For more information, see the [section after this table](#coding).   | 
| Whether the audio characteristics change in the middle of the stream.  |  For best results, verify that the audio characteristics of the source content don't change in the middle of the stream. For example, the codec of the source should not change. The coding mode should not change. A language should not disappear.  | 
| If the source content is HLS, whether the audio assets are in an audio rendition group or multiplexed with video.  |  MediaLive can ingest audio assets that are in a separate rendition group or multiplexed into a single stream with the video.  | 

**To decide on a coding mode**  
If multiple coding modes are available for the same language, decide which mode you want to use. Follow these guidelines:
+ You can extract some languages in one codec and coding mode, and other languages in another codec and coding mode. For example, you might want one or two languages available in 5.1 coding mode, and want other languages in 2.0 coding mode. 
+ You can extract the same language more than once. For example, you might want one language in both coding mode 5.1 and coding mode 2.0.
+ When deciding which codec and coding mode to extract for a given language, consider the coding mode you want for that language in the output. For each language, it is always easiest if the coding mode of the source content matches the coding mode of the output, because then you don't have to remix the audio in order to convert the coding mode. MediaLive supports remix, but remixing is an advanced feature that requires a good understanding of audio.

For example, in the output, you might want one language to be in coding mode 5.1. You might want other languages to be available in coding mode 2.0.

Therefore you might choose to extract the following:
+ Spanish in Dolby Digital 5.1
+ French and English in AAC 2.0
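An extraction plan like the one above is eventually written down as MediaLive audio selectors. The dict shape below follows the CreateChannel API's `AudioSelectors` field; the selector names and the choice of languages are illustrative.

```python
# Audio selectors matching the example extraction plan: Spanish (to be
# output in 5.1), French and English (to be output in 2.0). The Name
# values are arbitrary labels; language codes are ISO 639 values.

audio_selectors = [
    {
        "Name": "spanish-51",
        "SelectorSettings": {
            "AudioLanguageSelection": {
                "LanguageCode": "spa",
                "LanguageSelectionPolicy": "LOOSE",
            }
        },
    },
    {
        "Name": "french-20",
        "SelectorSettings": {
            "AudioLanguageSelection": {
                "LanguageCode": "fra",
                "LanguageSelectionPolicy": "LOOSE",
            }
        },
    },
    {
        "Name": "english-20",
        "SelectorSettings": {
            "AudioLanguageSelection": {
                "LanguageCode": "eng",
                "LanguageSelectionPolicy": "LOOSE",
            }
        },
    },
]
```

Note that a selector picks a language out of the source; the coding mode of each output is set separately in the output's audio encode settings, which is why matching source and output coding modes (as recommended above) keeps the configuration simple.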

# Assess captions


If you plan to include captions in an output group, you must determine if MediaLive can use the captions format in the source to produce the captions format that you want in the output. 

Obtain the following information about the captions source.


[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/medialive/latest/ug/assess-uss-captions.html)

**To assess the captions requirements**

Follow these steps for each [output group that you identified](identify-downstream-system.md) for your workflow.

1. Go to [Captions supported in MediaLive](supported-captions.md) and find the section for the output group. For example, find [Captions formats supported in HLS or MediaPackage outputs](supported-formats-hls-output.md). In the table in that section, read down the first column to find the format (container) that the content provider is providing. 

1. Read across to the *Supported caption input* column to find the caption formats that MediaLive supports in that source format.

1. Then read across to the *Supported output captions* column to find the caption formats that MediaLive can convert the source format to.

   You end up with a statement such as: "If you want to produce an HLS output and your source content is RTMP, you can convert embedded captions to burn-in, embedded, or WebVTT".

1. Verify that the source content from the content provider matches one of the formats in the *Supported caption input* column of the table. For example, verify that the source content contains embedded captions.

1. Find the list of captions formats that the downstream system supports. You obtained this list when you [identified the encode requirements for the output groups that you identified](identify-dss-video-audio.md). Verify that at least one of these output formats appears in the *Supported output captions* column of the table.

   If there is no match in the source content, or no match in the output, then you can't include captions in the output.

For example, assume that you need to produce an HLS output group. Assume that your content provider can give you content in RTP format with embedded captions. Assume that the downstream system requires that for HLS output, the output must include WebVTT captions.

Following the steps above, you read the table for HLS outputs. In the container column of the table, you find the row for RTP format. You read across to the source column and identify that embedded captions are a supported source format. You then read across to the output column and find that embedded captions can be converted to burn-in, embedded, or WebVTT captions. WebVTT captions is the format that the downstream system requires. Therefore, you conclude that you can include captions in the HLS output.
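The lookup walked through in this example can be sketched as a small table plus a check. The single entry below is a hypothetical sample that matches the worked example; the authoritative data is the table in [Captions supported in MediaLive](supported-captions.md).

```python
# (output group, source container, source caption format) -> the caption
# formats MediaLive can convert that source to. One sample entry only,
# mirroring the RTP/embedded/HLS example above.

CAPTION_CONVERSIONS = {
    ("HLS", "RTP", "embedded"): {"burn-in", "embedded", "WebVTT"},
}

def can_produce(output_group: str, container: str, source_fmt: str, wanted_fmt: str) -> bool:
    """True if the wanted output caption format is reachable from this source."""
    outputs = CAPTION_CONVERSIONS.get((output_group, container, source_fmt), set())
    return wanted_fmt in outputs

# The worked example: RTP source with embedded captions, HLS output
# group, downstream system requires WebVTT.
print(can_produce("HLS", "RTP", "embedded", "WebVTT"))
```

If the lookup comes back empty for either the source format or the required output format, that corresponds to the "no match" case above: you can't include captions in that output.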