

# Coordinate with the downstream system
<a name="hls-opg-coordinate-dss"></a>

The HLS output group in AWS Elemental MediaLive supports several types of downstream systems. Read the information that applies to the system you are working with.

**Topics**
+ [HLS output group to Amazon S3](origin-server-hls-s3.md)
+ [HLS output group to MediaStore](origin-server-ems.md)
+ [HLS output group to MediaPackage](origin-server-hls-emp.md)
+ [HLS output group to MediaPackage v2](origin-server-hls-empv2.md)
+ [HLS output group to HTTP](origin-server-http.md)

# HLS output group to Amazon S3
<a name="origin-server-hls-s3"></a>

Follow this procedure if you [determined](identify-downstream-system.md) that you will create an HLS output group with Amazon S3 as the destination. You and the operator of the downstream system must agree about the destination for the output of the HLS output group. 

**To arrange setup of the destination**

1. Decide if you need two destinations for the output: 
   + You need two destinations in a [standard channel](plan-redundancy.md).
   + You need one destination in a single-pipeline channel.

1. We recommend that you design the full path of the destination — the Amazon S3 bucket and all the folders. See [Design the path for the output destination](hls-destinations-design-step.md).

1. Ask the Amazon S3 user to create any buckets that don't already exist. 

   With MediaLive, the Amazon S3 bucket name must not use dot notation. That is, the bucket name must not contain a . (dot) between the words in the name. 

1. Discuss ownership with the Amazon S3 user. If the bucket belongs to another AWS account, you typically want that account to become the owner of the output. For more information, see [Controlling access to the output](#setting-dss-hls-canned-acl), after this procedure.

Note that you don't need user credentials to send to an S3 bucket. MediaLive has permission to write to the S3 bucket via the trusted entity. Someone in your organization should have already set up these permissions. For more information, see [Access requirements for the trusted entity](trusted-entity-requirements.md).
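The bucket-name rule in the procedure above can be checked programmatically. The following is a minimal sketch (the helper name and example bucket names are hypothetical); it checks only the MediaLive-specific rule, not the full Amazon S3 naming rules:

```python
def is_valid_medialive_bucket_name(name: str) -> bool:
    """Check the MediaLive-specific rule only: the bucket name must not
    contain a dot. Full Amazon S3 naming rules are not checked here."""
    return "." not in name

is_valid_medialive_bucket_name("my-live-output")   # acceptable: no dots
is_valid_medialive_bucket_name("my.live.output")   # rejected: uses dot notation
```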

## Controlling access to the output
<a name="setting-dss-hls-canned-acl"></a>

You might be sending output files to an Amazon S3 bucket that is owned by another AWS account. In this situation, you typically want the other account to become the owner of the output files (the objects being put in the bucket). If the bucket owner doesn't become the object owner, you (MediaLive) will be the only agent that can delete the files when they are no longer required.

It is therefore in everyone's interest to transfer ownership of the output files after they are in the Amazon S3 bucket.

To transfer object ownership, the following setup is required:
+ The bucket owner must add a bucket permissions policy that grants you permission to add an Amazon S3 canned access control list (ACL) when MediaLive delivers the output files to the bucket. The bucket owner should read the information in [Managing access with ACLs](https://docs.aws.amazon.com/AmazonS3/latest/userguide/acls) in the Amazon Simple Storage Service user guide. The bucket owner must set up ACL permissions for the bucket, not for the objects.
+ The bucket owner should also set up object ownership. This feature effectively makes it mandatory (rather than optional) for the sender (MediaLive) to include the *Bucket owner full control* ACL. The bucket owner should read the information in [Controlling object ownership](https://docs.aws.amazon.com/AmazonS3/latest/userguide/about-object-ownership) in the Amazon Simple Storage Service user guide.

  If the bucket owner implements this feature, then you must set up MediaLive to include the ACL. If you don't, delivery to the Amazon S3 bucket will fail.
+ You must set up MediaLive to include the *Bucket owner full control* ACL when it delivers to the bucket. You will perform this setup when you [create the channel](hls-destinations-s3-specify.md).

The S3 canned ACL feature supports ACLs other than *Bucket owner full control*, but those other ACLs are typically not applicable to the use case of delivering video from MediaLive.
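To make the bucket owner's side of this setup concrete, the sketch below builds a bucket policy that allows a sender account to put objects only when the request carries the *Bucket owner full control* canned ACL. The account ID, bucket name, and `Sid` are hypothetical placeholders; the `s3:x-amz-acl` condition key is the standard Amazon S3 mechanism for requiring a canned ACL on upload.

```python
import json

# Hypothetical values for illustration only.
SENDER_ACCOUNT = "111122223333"   # the account that runs MediaLive
BUCKET = "example-live-output"    # the bucket owner's bucket

# Bucket policy: allow the sender account to put objects, but only when the
# request includes the "bucket-owner-full-control" canned ACL.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "RequireBucketOwnerFullControl",
            "Effect": "Allow",
            "Principal": {"AWS": f"arn:aws:iam::{SENDER_ACCOUNT}:root"},
            "Action": "s3:PutObject",
            "Resource": f"arn:aws:s3:::{BUCKET}/*",
            "Condition": {
                "StringEquals": {"s3:x-amz-acl": "bucket-owner-full-control"}
            },
        }
    ],
}

print(json.dumps(policy, indent=2))
```

With a policy like this in place, a put request that omits the canned ACL is denied, which is why the matching MediaLive-side setting is required.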

# HLS output group to MediaStore
<a name="origin-server-ems"></a>

Follow this procedure if you [determined](identify-downstream-system.md) that you will create an HLS output group with AWS Elemental MediaStore as the destination. You and the operator of the downstream system must agree about the destination for the output of the HLS output group.

**To arrange setup of the destination**

1. Decide if you need two destinations for the output: 
   + You need two destinations in a [standard channel](plan-redundancy.md).
   + You need one destination in a single-pipeline channel.

1. We recommend that you design the full path of the destination. See [Design the path for the output destination](hls-destinations-design-step.md).

   If you have two destinations, the destination paths must be different from each other in some way. At least one portion of one path must differ from the corresponding portion of the other path. It is acceptable for all the portions to be different. 

1. Ask the MediaStore user to create any containers that don't already exist. 

1. Obtain the data endpoint for the container or containers. For example: 

   `https://a23f.data.mediastore.us-west-2.amazonaws.com`

   `https://fe30.data.mediastore.us-west-2.amazonaws.com`

   You need the data endpoints. You don't need the container name.

Note that you don't need user credentials to send to MediaStore containers. MediaLive has permission to write to the MediaStore container via the trusted entity. Someone in your organization should have already set up these permissions. For more information, see [Access requirements for the trusted entity](trusted-entity-requirements.md).

# HLS output group to MediaPackage
<a name="origin-server-hls-emp"></a>

Follow this procedure if you [determined](identify-downstream-system.md) that you will create an HLS output group, and will send to AWS Elemental MediaPackage over HTTPS. You and the operator of the downstream system must agree about the destination for the output of the HLS output group.

**To arrange setup of the destination**

1. Ask the MediaPackage user to create one channel on MediaPackage. Even if the MediaLive channel is a [standard channel](plan-redundancy.md) (with two pipelines), you need only one MediaPackage channel.

1. Arrange with the MediaPackage user to set up HTTPS user credentials. You must send to MediaPackage over a secure connection.

1. Obtain the following information:
   + The two URLs (*input endpoints*, in MediaPackage terminology) for the channel. The two URLs for a channel look like this:

      `https://6d2c.mediapackage.uswest-2.amazonaws.com/in/v2/9dj8/9dj8/channel`

      `https://6d2c.mediapackage.uswest-2.amazonaws.com/in/v2/9dj8/e333/channel`

     The two URLs are always identical, except for the folder just before `channel`.

     Make sure that you obtain the URLs (which start with `https://`), not the channel name (which starts with `arn`).
   + The user name and password to access the downstream system, if the downstream system requires authenticated requests. Note that these user credentials relate to user authentication, not to the protocol. User authentication is about whether the downstream system will accept your request. The protocol is about whether the request is sent over a secure connection.
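The rule that the two URLs differ only in the folder just before `channel` can be sketched as a check on the path segments. The helper name below is hypothetical, and the URLs are the examples from this section (with the `us-west-2` region spelled out):

```python
from urllib.parse import urlparse

def differing_segments(url_a: str, url_b: str) -> list:
    """Return the indexes of path segments that differ between two
    MediaPackage input endpoint URLs. For a valid pair, only the
    segment just before "channel" should differ."""
    a = urlparse(url_a).path.strip("/").split("/")
    b = urlparse(url_b).path.strip("/").split("/")
    return [i for i, (x, y) in enumerate(zip(a, b)) if x != y]

url_1 = "https://6d2c.mediapackage.us-west-2.amazonaws.com/in/v2/9dj8/9dj8/channel"
url_2 = "https://6d2c.mediapackage.us-west-2.amazonaws.com/in/v2/9dj8/e333/channel"
differing_segments(url_1, url_2)  # [3] — only the folder before "channel" differs
```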

# HLS output group to MediaPackage v2
<a name="origin-server-hls-empv2"></a>

Follow this procedure if you [determined](hls-choosing-hls-vs-emp.md) that you will create an HLS output group, and will send to MediaPackage v2. You and the operator of the downstream system must agree about the destination for the output of the HLS output group. 

**To arrange setup of the destination**

1. Ask the MediaPackage user to create one channel on MediaPackage. Even if the MediaLive channel is a [standard channel](plan-redundancy.md) (with two pipelines), you need only one MediaPackage channel.

1. Obtain the two URLs (*input endpoints*, in MediaPackage terminology) for the channel. The two URLs for a channel look like this:

    `https://mz82o4-1.ingest.hnycui.mediapackagev2.us-west-2.amazonaws.com/in/v1/live-sports/1/curling/index` 

    `https://mz82o4-2.ingest.hnycui.mediapackagev2.us-west-2.amazonaws.com/in/v1/live-sports/2/curling/index`

   The two URLs are slightly different, as shown in the examples above.

   Make sure that you obtain the URLs (which start with `https://`), not the channel name (which starts with `arn`).

   Note that you don't use user credentials to send to MediaPackage v2.
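The URL-versus-ARN distinction above lends itself to a trivial sanity check. This is a sketch only; the helper name is hypothetical, the first value is the example ingest URL from this section, and the ARN shown is an illustrative shape, not a real channel:

```python
def is_ingest_url(value: str) -> bool:
    """Sketch: MediaLive needs the ingest endpoint URL, which starts with
    https://, not the channel ARN, which starts with arn:."""
    return value.startswith("https://")

is_ingest_url("https://mz82o4-1.ingest.hnycui.mediapackagev2.us-west-2.amazonaws.com/in/v1/live-sports/1/curling/index")  # True
is_ingest_url("arn:aws:mediapackagev2:us-west-2:111122223333:channelGroup/live-sports/channel/curling")  # False
```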

# HLS output group to HTTP
<a name="origin-server-http"></a>

Follow this procedure if you [determined](identify-downstream-system.md) that you will create an HLS output group with one of the following downstream systems as the destination:
+ An HTTP or HTTPS PUT server.
+ An HTTP or HTTPS WebDAV server.
+ An Akamai origin server.

You and the operator of the downstream system must agree about the destination for the output of the HLS output group. 

When you deliver HLS over HTTP, you are often delivering to an origin server. The origin server typically has clear guidelines about the rules for the destination path, including the file name of the main manifest (the `.M3U8` file).

**To arrange setup of the destination**

You must talk to the operator at the downstream system to coordinate your setup.

1. If the downstream system isn't an Akamai server, find out if it uses PUT or WebDAV. 

1. Find out if the downstream system has special connection requirements. These connection fields are grouped in the console in the **CDN settings** section for the HLS output group. To display this page on the MediaLive console, in the **Create channel** page, in the **Output groups** section, choose **Add**, then choose **HLS**. Choose the group, then in **HLS settings**, open **CDN settings**.

1. Decide if you need two destinations for the output: 
   + You need two destinations in a [standard channel](plan-redundancy.md).
   + You need one destination in a single-pipeline channel.

1. Find out if the downstream system uses a secure connection. If it does, arrange with the operator to set up user credentials. 

1. Find out if the downstream system requires custom paths inside the main manifests and the child manifests. For more information, see [Customizing the paths inside HLS manifests](hls-manifest-paths.md).

1. If you are setting up a [standard channel](plan-redundancy.md), find out if the downstream system supports redundant manifests. If so, decide if you want to implement this feature. For more information, see [Creating redundant HLS manifests](hls-redundant-manifests.md), and specifically [Rules for most downstream systems](hls-redundant-manif-most-systems.md) and [Rules for Akamai CDNs](hls-redundant-manif-akamai.md) for specific instructions. 

1. Talk to the operator at the downstream system to agree on a full destination path for the three categories of HLS files (the main manifests, the child manifests, and the media files). MediaLive always puts all three categories of files for each destination in this one location. It’s not possible to configure MediaLive to put some files in another location. 

   If you have two destinations, the destination paths must be different from each other in some way. At least one portion of one path must differ from the corresponding portion of the other path. It is acceptable for all the portions to be different. Discuss this requirement with the operator of the downstream system. The downstream system might have specific rules about uniqueness.

1. Talk to the operator at the downstream system about special requirements for the names of the three categories of HLS files. Typically, the downstream system doesn’t have special requirements. 

1. Talk to the operator at the downstream system about special requirements for the modifier on the names of the child manifests and media files. 

   The child manifests and media files always include this modifier in their file names. The modifier distinguishes each output from the others, so it must be unique in each output. For example, the files for the high-resolution output must have a different name from the files for the low-resolution output: the files for one output could use the name and modifier `curling_high`, while the other output could use `curling_low`.

   Typically, the downstream system doesn’t have special requirements.

1. Ask the operator of the downstream system if the media files should be set up in separate subdirectories. For example, one subdirectory for the first 1000 segments, another subdirectory for the second 1000 segments, and so on.

   Most downstream systems don’t require separate subdirectories.

1. Agree on the portions of the destination path where the downstream system has special requirements.
   + For example, the downstream system might only require that you send to a specific host. The downstream system doesn't need to know about the folder or file names you will use.

     For example, send to two folders that you name, but on the host at `https://203.0.113.55`.

     Or send to two folders that you name, but on the hosts at `https://203.0.113.55` and `https://203.0.113.82`.
   + Or the downstream system might require a specific host and folder, but with a file name that you choose. For example, this host and folders:

     `https://203.0.113.55/sports/delivery/`

     `https://203.0.113.55/sports/backup/`

1. Make a note of the information you have collected:
   + The connection type for the downstream system – Akamai, PUT, or WebDAV.
   + The settings for connection fields, if the downstream system has special requirements.
   + The protocol for delivery—HTTP or HTTPS.
   + The user name and password to access the downstream system, if the downstream system requires authenticated requests. Note that these user credentials relate to user authentication, not to the protocol. User authentication is about whether the downstream system will accept your request. The protocol is about whether the request is sent over a secure connection.
   + All or part of the destination paths, possibly including the file names.
   + Whether you need to set up separate subdirectories.
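How the destination path and name modifier combine into the three categories of file names can be sketched as follows. This is illustrative only (not MediaLive's exact naming scheme); the helper name and the segment-number layout are hypothetical, and the host and folders are the examples from this section:

```python
def output_names(destination: str, base_name: str, modifier: str) -> dict:
    """Illustrative sketch only: the name modifier is appended to the base
    name, so each output's child manifest and media files stay distinct
    within the shared destination path."""
    stem = f"{destination.rstrip('/')}/{base_name}{modifier}"
    return {
        "child_manifest": stem + ".m3u8",     # one per output
        "first_segment": stem + ".00001.ts",  # media files share the stem
    }

output_names("https://203.0.113.55/sports/delivery", "curling", "_high")
output_names("https://203.0.113.55/sports/backup", "curling", "_low")
```

Because the two destinations differ in one portion of the path (`delivery` versus `backup`) and the two outputs differ in the modifier (`_high` versus `_low`), no two outputs ever produce colliding file names.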