

# Managing outputs in MediaConnect

Outputs are the destinations where you want MediaConnect to send the content of your flow. You can add, remove, and disable outputs at any time, even when the flow is active. A disabled output stops streaming content to its destination and doesn't incur data transfer costs. Each output is sent to the IP address that you specify, which is useful if you intend to send your content to a destination such as an on-premises encoder.

For transport stream flows, you can [grant an entitlement](entitlements-grant.md) to share your content with another AWS account (subscriber account). When the subscriber creates a flow using your content as the source, AWS Elemental MediaConnect generates an output on your flow.

**Note**  
If you [disable](entitlements-disable.md) an entitlement after the subscriber creates a flow based on that entitlement, the associated output remains on your flow. This output continues to count toward your maximum number of outputs. To delete an output that's associated with an entitlement, [revoke](entitlements-revoke.md) the entitlement.

**Topics**
+ [Using NDI® outputs in a MediaConnect flow](outputs-using-ndi.md)
+ [Adding outputs to a MediaConnect flow](outputs-add.md)
+ [Viewing the list of outputs for a MediaConnect flow](outputs-view-list.md)
+ [Updating outputs on a MediaConnect flow](outputs-update.md)
+ [Managing tags on a MediaConnect output](outputs-manage-tags.md)
+ [Disabling or removing outputs from a MediaConnect flow](outputs-remove.md)
+ [Output destinations](destinations.md)
+ [Determining an output's IP address](output-ip-address.md)

# Using NDI® outputs in a MediaConnect flow

You can now use NDI® outputs to send content from your MediaConnect flow to your NDI environment.

AWS Elemental MediaConnect can convert MPEG transport streams into [Network Device Interface (NDI®)](https://ndi.video/tech/), a protocol for high-quality, low-latency video and audio over IP networks. This capability enables direct content delivery within your network, connecting traditional contribution workflows with IP-based video production systems.

Using NDI outputs, you can create streamlined production workflows that take content from your AVC or HEVC-based encoder, process it through a MediaConnect flow as a transport stream, and output it directly into your Virtual Private Cloud (VPC) as NDI. Your production systems—including vision mixers, audio mixers, replay systems, and graphics engines—can immediately access these NDI streams through standard NDI discovery. This integration works with your existing NDI infrastructure, requiring no modifications to your current VPC setup.

## Key points


### Understanding NDI terminology


In video and audio workflows, the terms *source* and *output* have specific meanings that vary between contexts. Understanding these differences helps you work with NDI outputs across your production workflow.
+ In MediaConnect flows:
  + A *source* is the incoming video and audio feed to the flow. NDI is supported as a source type.
  + An *output* determines where and how your content is delivered. NDI is supported as an output type.
+ In NDI implementation:
  + An NDI source is a network endpoint that sends video and audio streams over IP networks using the NDI protocol. 
  + When you add an NDI output to your MediaConnect flow, MediaConnect acts as an NDI sender by creating an NDI source. Your production systems can then connect to this source as NDI receivers to get the video and audio stream.

In summary: Your MediaConnect flow takes video and audio from a flow source and, with an NDI flow output enabled, it creates an NDI source that your production systems can receive from.

### How NDI outputs work


At a high level, here’s how your content moves through MediaConnect when you use NDI outputs:

1. You create a flow that uses the large flow size with NDI enabled, and configure your NDI discovery servers and NDI output settings.

1. You send content to the flow source, using supported transport stream protocols such as SRT or Zixi.

1. MediaConnect processes the content to the flow output, creating a discoverable NDI source in your VPC.

1. The production systems in your network can now discover and connect to these endpoints and receive your content.

This workflow maintains compatibility with existing broadcast infrastructure while adding the flexibility and networking advantages of NDI distribution.

### White screen generation for NDI outputs


When you configure a transport stream flow with NDI outputs, MediaConnect automatically generates white video frames to provide a valid source signal for downstream NDI devices. This helps you confirm that your NDI output is properly configured and functioning, even when your source isn't actively sending content. 

The white frame generation operates as follows: 
+ **On initial flow startup** - If no source content is received within 10 seconds, MediaConnect generates white frames with silent audio on your NDI output.
+ **After a source has connected and started sending content** - If a source disconnects for more than 60 seconds, MediaConnect generates white frames with silent audio.

This feature is particularly useful when you're setting up flows in advance of live events, or in situations where your source content isn't immediately available. The white frames serve as a visual indicator that your NDI output is working correctly and is ready to receive source content. This is more informative than seeing a black screen, which could either indicate a loss of signal or intentional black video content from your source. 

This feature is available exclusively for NDI outputs. You don't need to configure or enable white screen generation - it works automatically whenever your flow is in a running state but isn't receiving source content. When your source starts sending content to your flow, the source content automatically replaces the white frames. MediaConnect stops generating silent audio frames, and the audio passes through from the source.
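The timing rules above can be sketched as a small decision helper. This is illustrative only; the constants and function names are assumptions for the sketch, not MediaConnect internals:

```python
# Illustrative sketch of the white-frame fallback rules described above:
# white frames start after 10 seconds of no content on initial startup, or
# after 60 seconds once a source has connected at least once.

INITIAL_TIMEOUT_S = 10    # grace period before any source has connected
RECONNECT_TIMEOUT_S = 60  # grace period after a source disconnects

def output_signal(source_ever_connected: bool,
                  source_active: bool,
                  seconds_without_content: float) -> str:
    """Return which signal the NDI output carries."""
    if source_active:
        return "source"  # source content always replaces white frames
    timeout = RECONNECT_TIMEOUT_S if source_ever_connected else INITIAL_TIMEOUT_S
    if seconds_without_content > timeout:
        return "white frames + silent audio"
    return "none yet"
```

For example, a flow that has been running for 12 seconds without ever receiving a source falls back to white frames, while a flow whose source dropped 30 seconds ago is still within its grace period.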

### Considerations and limitations


When planning your NDI output implementation in MediaConnect, keep in mind the following.

[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/mediaconnect/latest/ug/outputs-using-ndi.html)

### Supported decoding parameters


The following table outlines the supported decoding parameters for NDI outputs in MediaConnect.

For video decoder parameters, the supported bit depths and codec profiles are the same for AVC and HEVC.


| Decoding parameter | Description | 
| --- | --- | 
|  Video codec and chroma sampling profiles  |  [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/mediaconnect/latest/ug/outputs-using-ndi.html) [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/mediaconnect/latest/ug/outputs-using-ndi.html)  | 
|  Audio codec support  |  [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/mediaconnect/latest/ug/outputs-using-ndi.html)  If the source contains multiple audio PIDs, MediaConnect combines all the audio streams. However, this is only possible if the sample rates are the same across all of the PIDs.   | 
|  Supported resolutions  |  Supports resolutions from 480p up to 1080p  | 
|  Scan type  |  Supports both interlaced and progressive formats  | 
|  Frame rates  |  Supports the following frame rates: 23.98, 24, 25, 29.97, 30, 50, 59.94, 60 fps  | 
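As a quick pre-flight check, the resolution and frame-rate rows of the table above can be encoded as a small validation helper. This is a sketch; the names are illustrative and not a MediaConnect API:

```python
# Pre-flight check of a candidate source against the NDI output parameters
# in the table above (480p-1080p, and the listed frame rates).

SUPPORTED_FPS = {23.98, 24, 25, 29.97, 30, 50, 59.94, 60}
MIN_HEIGHT, MAX_HEIGHT = 480, 1080  # "from 480p up to 1080p"

def ndi_params_supported(height: int, fps: float) -> bool:
    """Return True if the resolution and frame rate fall in the supported ranges."""
    return MIN_HEIGHT <= height <= MAX_HEIGHT and fps in SUPPORTED_FPS
```

For example, 1080p59.94 passes, while 2160p (UHD) and nonstandard rates such as 48 fps do not.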

## Next steps


To get started with NDI outputs, first [create a flow](flows-create.md) with NDI enabled, then [add an NDI output](outputs-add-ndi.md) to your flow.

## Additional resources

+ [Flow sizes and capabilities](flow-sizes-capabilities.md)
+ [Best practices](best-practices.md)

# Adding outputs to a MediaConnect flow

You can now add outputs that use the Zixi pull protocol.

For transport stream flows, you can add up to 50 outputs. However, for optimal performance, follow the guidance offered in [Best practices](best-practices.md). Every output must have a name, a [protocol](protocols.md), an IP address, and a port.

**Note**  
If you intend to set up an entitlement for an output, don't create the output. Instead, [grant an entitlement](entitlements-grant.md). When the subscriber creates a flow using your content as the source, the service creates an output on your flow.

The method that you use to add an output to a flow depends on the type of output that you want to add:
+ [Standard output (transport stream flow)](outputs-add-standard.md) – Sends compressed content to any destination that is not a virtual private cloud (VPC) that you configured using Amazon Virtual Private Cloud.
+ [VPC output (transport stream flow)](outputs-add-vpc.md) – Sends compressed content to a VPC that you configured using Amazon Virtual Private Cloud.
+ [NDI® output (transport stream flow)](outputs-add-ndi.md) – Sends high-quality, low-latency content over IP networks so that it can be received by the production systems within your VPC network.
+ [VPC output (CDI flow)](outputs-add-vpc.md) – Sends uncompressed content to a VPC that you configured using Amazon Virtual Private Cloud.

# Adding standard outputs to a MediaConnect flow

For transport stream flows, you can add up to 50 outputs. However, for optimal performance, follow the guidance offered in [Best practices](best-practices.md). A standard output goes to any destination that is not part of a virtual private cloud (VPC) that you created using Amazon Virtual Private Cloud.

**Note**  
CDI flows don't support standard outputs.

**To add a standard output to a flow (console)**

1. Open the MediaConnect console at [https://console.aws.amazon.com/mediaconnect/](https://console.aws.amazon.com/mediaconnect/).

1. On the **Flows** page, choose the name of the flow that you want to add an output to.

   The details page for that flow appears. 

1. Choose the **Outputs** tab.

1. Choose **Add output**.

1. For **Name**, specify a name for your output. This value is an identifier that is visible only on the AWS Elemental MediaConnect console and is not visible to the end user.

1. For **Output type**, choose **Standard output**.

1. For **Description**, enter a description that will remind you later where this output is going. This might be the company name or notes about the setup.

1. Determine which protocol you want to use for the output.

1. For specific instructions based on the protocol that you want to use, choose one of the following tabs:

------
#### [ RIST ]

   1. For **Protocol**, choose **RIST**. 

   1. For **IP address**, choose the IP address where you want to send the output.

   1. For **Port**, choose the port that you want to use when the content is distributed to this output. For more information about ports, see [Output destinations](destinations.md).
**Note**  
The RIST protocol requires one additional port for error correction. To accommodate this requirement, AWS Elemental MediaConnect reserves the port that is +1 from the port that you specify. For example, if you specify port 4000 for the output, the service assigns ports 4000 and 4001.

   1. For **Smoothing latency**, specify the additional delay that you want to use with output smoothing. We recommend that you specify a value of 0 ms to disable smoothing. However, if the receiver can't process the stream properly, specify a value between 100 and 1,000 ms. This way, AWS Elemental MediaConnect attempts to correct jitter from the flow source. If you keep this field blank, the service uses the default value of 0 ms.

------
#### [ RTP or RTP-FEC ]

   1. For **Protocol**, choose **RTP** or **RTP-FEC**. 

   1. For **IP address**, choose the IP address where you want to send the output.

   1. For **Port**, choose the port that you want to use when the content is distributed to this output. For more information about ports, see [Output destinations](destinations.md).
**Note**  
The RTP-FEC protocol requires two additional ports for error correction. To accommodate this requirement, AWS Elemental MediaConnect reserves the ports that are +2 and +4 from the port that you specify. For example, if you specify port 4000 for the output, the service assigns ports 4000, 4002, and 4004. 

   1. For **Smoothing latency**, specify the additional delay that you want to use with output smoothing. We recommend that you specify a value of 0 ms to disable smoothing. However, if the receiver can't process the stream properly, specify a value between 100 and 1,000 ms. This way, AWS Elemental MediaConnect attempts to correct jitter from the flow source. If you keep this field blank, the service uses the default value of 0 ms.

------
#### [ SRT listener ]

   1. For **Name**, specify a name for your output. This value is an identifier that is visible only on the MediaConnect console. It is not visible to anyone outside of the current AWS account.

   1. For **Protocol**, choose **SRT listener**. 

   1. For **Minimum latency**, specify the minimum size of the buffer (delay) that you want the service to maintain. A higher latency value means a longer delay in transmitting the stream, but more room for error correction. A lower latency value means a shorter delay, but less room for error correction. You can choose a value from 10–15,000 ms. If you keep this field blank, MediaConnect uses the default value of 2,000 ms.

      The SRT protocol uses a **minimum latency** configuration on each side of the connection. The larger of these two values is used as the *recovery latency*. If the transmitted bitrate, multiplied by the recovery latency, is higher than the *receiver buffer*, the buffer will overflow and the stream can fail with a `Buffer Overflow Error`. On the SRT receiver side, the receiver buffer is configured by the `SRTO_RCVBUF` value. The size of the receiver buffer is limited by the *flow control window size* (`SRTO_FC`) value. On the MediaConnect side, the receiver buffer is calculated as the **maximum bitrate** value multiplied by the **minimum latency** value. For more information about the SRT buffer, see the [SRT Configuration Guidelines](https://github.com/Haivision/srt/blob/master/docs/API/configuration-guidelines.md).

       

   1. For **CIDR allow list**, specify a range of IP addresses that are allowed to view content from your output. Format the IP addresses as a Classless Inter-Domain Routing (CIDR) block, for example, 10.24.34.0/23. For more information about CIDR notation, see [RFC 4632](https://tools.ietf.org/html/rfc4632).
**Important**  
Specify a CIDR block that is as precise as possible. Include only the IP addresses that you want to receive content from this output. If you specify a CIDR block that is too wide, it allows for the possibility of outside parties receiving your content.
**Tip**  
To specify an additional CIDR block, choose **Add**. You can specify up to three CIDR blocks.

   1. For **Port**, choose the port that you want to use when the content is distributed to this output. For more information about ports, see [Output destinations](destinations.md).

   1. If you want to encrypt the video as it is sent to this output, do the following:

      1. In the **Encryption** section, choose **Enable**.

      1. **Encryption type** is not selectable because **srt-password** is the only encryption type available for this protocol.

      1. For **Role ARN**, specify the ARN of the role that you created when you [set up encryption](encryption-static-key-set-up.md#encryption-static-key-set-up-create-iam-role).

      1. For **Secret ARN**, specify the ARN that AWS Secrets Manager assigned when you [created the secret to store the SRT password](encryption-srt-password-set-up.md#encryption-srt-password-set-up-password).

------
#### [ SRT caller ]

   1. For **Protocol**, choose **SRT-caller**. 

   1. For **Minimum latency**, specify the minimum size of the buffer (delay) that you want the service to maintain. A higher latency value means a longer delay in transmitting the stream, but more room for error correction. A lower latency value means a shorter delay, but less room for error correction. You can choose a value from 10–15,000 ms. If you keep this field blank, MediaConnect uses the default value of 2,000 ms. 

      The SRT protocol uses a **minimum latency** configuration on each side of the connection. The larger of these two values is used as the *recovery latency*. If the transmitted bitrate, multiplied by the recovery latency, is higher than the *receiver buffer*, the buffer will overflow and the stream can fail with a `Buffer Overflow Error`. On the SRT receiver side, the receiver buffer is configured by the `SRTO_RCVBUF` value. The size of the receiver buffer is limited by the *flow control window size* (`SRTO_FC`) value. On the MediaConnect side, the receiver buffer is calculated as the **maximum bitrate** value multiplied by the **minimum latency** value. For more information about the SRT buffer, see the [SRT Configuration Guidelines](https://github.com/Haivision/srt/blob/master/docs/API/configuration-guidelines.md).

   1. For **Destination IP address**, enter the IP address or domain of the output's destination.

   1. For **Port**, choose the port that you want to use when the content is distributed to this output. For more information about ports, see [Output destinations](destinations.md).

   1. If you want to encrypt the video as it is sent to this output, do the following:

      1. In the **Encryption** section, choose **Enable**.

      1. **Encryption type** is not selectable because **srt-password** is the only encryption type available for this protocol.

      1. For **Role ARN**, specify the ARN of the role that you created when you [set up encryption](encryption-static-key-set-up.md#encryption-static-key-set-up-create-iam-role).

      1. For **Secret ARN**, specify the ARN that AWS Secrets Manager assigned when you [created the secret to store the SRT password](encryption-srt-password-set-up.md#encryption-srt-password-set-up-password).

------
#### [ Zixi pull ]

   1. For **Protocol**, choose **Zixi pull**. 

   1. For **Stream ID**, enter the **Stream** value that was configured when you added the input on the Zixi receiver. In the Zixi receiver, this value is found in the **Stream parameters** section.
**Important**  
If you keep this field blank, the service uses the output name as the stream ID. Because the stream ID must match the value that is set in the Zixi receiver, you must specify the stream ID if it is not exactly the same as the output name.

   1. For **Remote ID**, enter the **ID** value that is assigned to the Zixi receiver. In the Zixi receiver, this value is located in the **General** settings menu and is labeled **ID**. The **ID** value can also be found on the Zixi receiver **Status** page.

   1. For **Maximum latency**, specify the size of the buffer (delay) that you want the service to maintain. A higher latency value means a longer delay in transmitting the stream, but more room for error correction. A lower latency value means a shorter delay, but less room for error correction. You can choose a value between 0 and 60,000 ms. If you keep this field blank, the service uses the latency that is set in the receiver.

   1. For **CIDR allow list**, specify a range of IP addresses that are allowed to retrieve content from this output. Format the IP addresses as a Classless Inter-Domain Routing (CIDR) block, for example, 10.24.34.0/23. For more information about CIDR notation, see [RFC 4632](https://tools.ietf.org/html/rfc4632).
**Tip**  
To specify an additional CIDR block, choose **Add**. You can specify up to three CIDR blocks.

   1. If you want to encrypt the video as it is sent to this output, do the following:

      1. In the **Encryption** section, choose **Enable**.

      1. For **Encryption type**, choose **Static key**.

      1. For **Role ARN**, specify the ARN of the role that you created when you [set up encryption](encryption-static-key-set-up.md#encryption-static-key-set-up-create-iam-role).

      1. For **Secret ARN**, specify the ARN that AWS Secrets Manager assigned when you [created the secret to store the encryption key](encryption-static-key-set-up.md#encryption-static-key-set-up-store-key).

      1. For **Encryption algorithm**, choose the type of encryption that you want to use to encrypt the content.

------
#### [ Zixi push ]

   1. For **Protocol**, choose **Zixi push**. 

   1. For **IP address**, choose the IP address where you want to send the output.

   1. For **Port**, choose the port that you want to use when the content is distributed to this output. For more information about ports, see [Output destinations](destinations.md).

   1. For **Stream ID**, enter the stream ID that is set in the Zixi receiver.
**Important**  
If you keep this field blank, the service uses the output name as the stream ID. Because the stream ID must match the value set in the Zixi receiver, you must specify the stream ID if it is not exactly the same as the output name.

   1. For **Maximum latency**, specify the size of the buffer (delay) that you want the service to maintain. A higher latency value means a longer delay in transmitting the stream, but more room for error correction. A lower latency value means a shorter delay, but less room for error correction. You can choose a value between 0 and 60,000 ms. If you keep this field blank, the service uses the default value of 6,000 ms.

   1. If you want to encrypt the video as it is sent to this output, do the following:

      1. In the **Encryption** section, choose **Enable**.

      1. For **Encryption type**, choose **Static key**.

      1. For **Role ARN**, specify the ARN of the role that you created when you [set up encryption](encryption-static-key-set-up.md#encryption-static-key-set-up-create-iam-role).

      1. For **Secret ARN**, specify the ARN that AWS Secrets Manager assigned when you [created the secret to store the encryption key](encryption-static-key-set-up.md#encryption-static-key-set-up-store-key).

      1. For **Encryption algorithm**, choose the type of encryption that you want to use to encrypt the content.

------

1. Choose **Add output**.
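The SRT buffer guidance in the SRT listener and SRT caller steps above comes down to simple arithmetic: the MediaConnect-side receiver buffer is the flow's maximum bitrate multiplied by the minimum latency, and overflow becomes possible when bitrate times recovery latency exceeds that buffer. A minimal sketch of this calculation (helper names are illustrative):

```python
# Sketch of the SRT receiver-buffer arithmetic described in the SRT listener
# and SRT caller steps above. Values and names are illustrative.

def receiver_buffer_bytes(max_bitrate_bps: int, min_latency_ms: int) -> float:
    """MediaConnect-side receiver buffer: maximum bitrate x minimum latency."""
    return max_bitrate_bps / 8 * (min_latency_ms / 1000.0)

def risks_overflow(bitrate_bps: int, recovery_latency_ms: int,
                   buffer_bytes: float) -> bool:
    """Overflow risk when bitrate x recovery latency exceeds the buffer."""
    return bitrate_bps / 8 * (recovery_latency_ms / 1000.0) > buffer_bytes

# Example: a 20 Mb/s flow with the default 2,000 ms minimum latency has a
# 5,000,000-byte buffer, which a 3,000 ms recovery latency would overflow.
buf = receiver_buffer_bytes(20_000_000, 2_000)
```

This is why the larger of the two sides' minimum latency values matters: it becomes the recovery latency used in the overflow check.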

**To add an output to a flow (AWS CLI)**

1. Create a JSON file that contains the details of the output that you want to add to the flow.

   The following example shows the structure for the contents of the file:

   ```
   {
       "FlowArn": "arn:aws:mediaconnect:us-east-1:111122223333:flow:1-23aBC45dEF67hiJ8-12AbC34DE5fG:BasketballGame",
       "Outputs": [
           {
               "Description": "RTP-FEC Output",
               "Destination": "192.0.2.12",
               "Name": "RTPOutput",
               "Port": 5020,
               "Protocol": "rtp-fec",
               "SmoothingLatency": 100
           }
       ]
   }
   ```

1. In the AWS CLI, use the `add-flow-output` command:

   ```
   aws mediaconnect add-flow-outputs --flow-arn "arn:aws:mediaconnect:us-east-1:111122223333:flow:1-23aBC45dEF67hiJ8-12AbC34DE5fG:BasketballGame" --cli-input-json file://addFlowOutput.txt --region us-east-1
   ```

   The following example shows the return value:

   ```
   {
       "FlowArn": "arn:aws:mediaconnect:us-east-1:111122223333:flow:1-23aBC45dEF67hiJ8-12AbC34DE5fG:BasketballGame",
       "Outputs": [
           {
               "Name": "RTPOutput",
               "Port": 5020,
               "Transport": {
                   "SmoothingLatency": 100,
                   "Protocol": "rtp-fec"
               },
               "Destination": "192.0.2.12",
               "OutputArn": "arn:aws:mediaconnect:us-east-1:111122223333:output:2-3aBC45dEF67hiJ89-c34de5fG678h:RTPOutput",
               "Description": "RTP-FEC Output"
           }
       ]
   }
   ```
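If you script flow management with the AWS SDK for Python (Boto3), the same operation is available as `add_flow_outputs` on the MediaConnect client. The following is a minimal sketch using the placeholder ARN and addresses from the example above; a real call requires valid AWS credentials and the `boto3` package:

```python
# Sketch of the CLI example above using the AWS SDK for Python (Boto3).

def build_rtp_fec_output(destination: str, port: int) -> dict:
    """Build the same output entry as the JSON file shown earlier."""
    return {
        "Description": "RTP-FEC Output",
        "Destination": destination,
        "Name": "RTPOutput",
        "Port": port,
        "Protocol": "rtp-fec",
        "SmoothingLatency": 100,
    }

def add_output(flow_arn: str, output: dict, region: str = "us-east-1"):
    import boto3  # requires the AWS SDK for Python and valid credentials
    client = boto3.client("mediaconnect", region_name=region)
    # AddFlowOutputs is the same API operation that the CLI command invokes.
    return client.add_flow_outputs(FlowArn=flow_arn, Outputs=[output])
```

Calling `add_output` with the flow ARN and `build_rtp_fec_output("192.0.2.12", 5020)` returns the same structure as the CLI response shown above.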

# Adding VPC outputs to a flow

You can now add an output to send content from your AWS Elemental MediaConnect flow to your VPC without going over the public internet.

A VPC output goes to a virtual private cloud (VPC) that you created using Amazon Virtual Private Cloud.

For transport stream flows, you can add outputs (up to 50) even if the flow is active. For CDI flows, you can add outputs (up to 10) only if the flow is in standby mode. For optimal performance, follow the guidance offered in [Best practices](best-practices.md). 

**To add a VPC output to a flow (console)**

1. Open the MediaConnect console at [https://console.aws.amazon.com/mediaconnect/](https://console.aws.amazon.com/mediaconnect/).

1. On the **Flows** page, choose the name of the flow that you want to add an output to.

   The details page for that flow appears. 

1. Choose the **Outputs** tab.

1. Choose **Add output**.

1. For **Name**, specify a name for your output. This value is an identifier that is visible only on the AWS Elemental MediaConnect console and is not visible to the end user.

1. For **Output type**, choose **VPC output**.

1. For **Description**, enter a description that will remind you later where this output is going. This might be the company name or notes about the setup.

1. Determine which protocol you want to use for the output. The protocol options are dependent on the flow type.
   + For transport stream flows, the protocol options are: RTP, RTP-FEC, RIST, SRT listener, SRT caller, and Zixi push.
   + For CDI flows, the protocol options are: CDI and ST 2110 JPEG XS.

1. For specific instructions based on the protocol that you want to use, choose one of the following tabs:

------
#### [ RIST ]

   1. For **Protocol**, choose **RIST**. 

   1. For **IP address**, choose the IP address where you want to send the output.

   1. For **Port**, choose the port that you want to use when the content is distributed to this output. For more information about ports, see [Output destinations](destinations.md).
**Note**  
The RIST protocol requires one additional port for error correction. To accommodate this requirement, AWS Elemental MediaConnect reserves the port that is +1 from the port that you specify. For example, if you specify port 4000 for the output, the service assigns ports 4000 and 4001.

   1. For **Smoothing latency**, specify the additional delay that you want to use with output smoothing. We recommend that you specify a value of 0 ms to disable smoothing. However, if the receiver can't process the stream properly, specify a value between 100 and 1,000 ms. This way, AWS Elemental MediaConnect attempts to correct jitter from the flow source. If you keep this field blank, the service uses the default value of 0 ms.

   1. For **Output to VPC**, choose the name of the VPC interface that you want to send your output to.

------
#### [ RTP or RTP-FEC ]

   1. For **Protocol**, choose **RTP** or **RTP-FEC**. 
**Note**  
RTP and RTP-FEC outputs are compliant with the SMPTE 2022-7 standard. If your downstream receiver supports 2022-7 source merging, RTP and RTP-FEC outputs will be compatible.

   1. For **IP address**, choose the IP address where you want to send the output.

   1. For **Port**, choose the port that you want to use when the content is distributed to this output. For more information about ports, see [Output destinations](destinations.md).
**Note**  
The RTP-FEC protocol requires two additional ports for error correction. To accommodate this requirement, AWS Elemental MediaConnect reserves the ports that are +2 and +4 from the port that you specify. For example, if you specify port 4000 for the output, the service assigns ports 4000, 4002, and 4004. 

   1. For **Smoothing latency**, specify the additional delay that you want to use with output smoothing. We recommend that you specify a value of 0 ms to disable smoothing. However, if the receiver can't process the stream properly, specify a value between 100 and 1,000 ms. This way, AWS Elemental MediaConnect attempts to correct jitter from the flow source. If you keep this field blank, the service uses the default value of 0 ms.

   1. For **Output to VPC**, choose the name of the VPC interface that you want to send your output to.

------
#### [ SRT listener ]

   1. For **Name**, specify a name for your output. This value is an identifier that is visible only on the MediaConnect console. It is not visible to anyone outside of the current AWS account.

   1. For **Output type**, select **VPC output**.

   1. For **Protocol**, choose **SRT listener**. 

   1. For **Description**, enter a description that can help you distinguish one output from another. This might be the company name or notes about the setup.

   1. For **Minimum latency**, specify the minimum size of the buffer (delay) that you want the service to maintain. A higher latency value means a longer delay in transmitting the stream, but more room for error correction. A lower latency value means a shorter delay, but less room for error correction. You can choose a value from 10–15,000 ms. If you keep this field blank, MediaConnect uses the default value of 2,000 ms. 

      The SRT protocol uses a **minimum latency** configuration on each side of the connection. The larger of these two values is used as the *recovery latency*. If the transmitted bitrate, multiplied by the recovery latency, is higher than the *receiver buffer*, the buffer will overflow and the stream can fail with a `Buffer Overflow Error`. On the SRT receiver side, the receiver buffer is configured by the `SRTO_RCVBUF` value. The size of the receiver buffer is limited by the *flow control window size* (`SRTO_FC`) value. On the MediaConnect side, the receiver buffer is calculated as the **maximum bitrate** value multiplied by the **minimum latency** value. For more information about the SRT buffer, see the [SRT Configuration Guidelines](https://github.com/Haivision/srt/blob/master/docs/API/configuration-guidelines.md).

   1. For **Port**, choose the port that you want to use when the content is distributed to this output. For more information about ports, see [Output destinations](destinations.md).

   1. For **Output to VPC**, choose the name of the VPC interface that you want to send your output to.

   1. If you want to encrypt the video as it is sent to this output, do the following:

      1. In the **Encryption** section, choose **Enable**.

      1. For **Role ARN**, specify the ARN of the role that you created when you [set up encryption](encryption-static-key-set-up.md#encryption-static-key-set-up-create-iam-role).

      1. For **Secret ARN**, specify the ARN that AWS Secrets Manager assigned when you [created the secret to store the SRT password](encryption-srt-password-set-up.md#encryption-srt-password-set-up-password).

------
#### [ SRT caller ]

   1. For **Name**, specify a name for your output. This value is an identifier that is visible only on the MediaConnect console. It is not visible to anyone outside of the current AWS account.

   1. For **Output type**, select **VPC output**.

   1. For **Protocol**, choose **SRT caller**.

   1. For **Description**, enter a description that can help you distinguish one output from another. This might be the company name or notes about the setup.

   1. For **Minimum latency**, specify the minimum size of the buffer (delay) that you want the service to maintain. A higher latency value means a longer delay in transmitting the stream, but more room for error correction. A lower latency value means a shorter delay, but less room for error correction. You can choose a value from 10–15,000 ms. If you keep this field blank, MediaConnect uses the default value of 2,000 ms. 

      The SRT protocol uses a **minimum latency** configuration on each side of the connection. The larger of these two values is used as the *recovery latency*. If the transmitted bitrate, multiplied by the recovery latency, is higher than the *receiver buffer*, the buffer will overflow and the stream can fail with a `Buffer Overflow Error`. On the SRT receiver side, the receiver buffer is configured by the `SRTO_RCVBUF` value. The size of the receiver buffer is limited by the *flow control window size* (`SRTO_FC`) value. On the MediaConnect side, the receiver buffer is calculated as the **maximum bitrate** value multiplied by the **minimum latency** value. For more information about the SRT buffer, see the [SRT Configuration Guidelines](https://github.com/Haivision/srt/blob/master/docs/API/configuration-guidelines.md).

   1. For **Destination IP address**, enter the IP address or domain of the output's destination.

   1. For **Port**, choose the port that you want to use when the content is distributed to this output. For more information about ports, see [Output destinations](destinations.md).

   1. For **Output to VPC**, choose the name of the VPC interface that you want to send your output to.

   1. If you want to encrypt the video as it is sent to this output, do the following:

      1. In the **Encryption** section, choose **Enable**.

      1. **Encryption type** is not selectable. **Srt-password** is the only encryption type available for this protocol.

      1. For **Role ARN**, specify the ARN of the role that you created when you [set up encryption](encryption-static-key-set-up.md#encryption-static-key-set-up-create-iam-role).

      1. For **Secret ARN**, specify the ARN that AWS Secrets Manager assigned when you [created the secret to store the SRT password](encryption-srt-password-set-up.md#encryption-srt-password-set-up-password).
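
The overflow condition described in the **Minimum latency** step can be sketched numerically. This is an illustrative check only, not MediaConnect code; the function name and example values are assumptions:

```python
def srt_receiver_buffer_ok(max_bitrate_bps: float, min_latency_ms: float,
                           receiver_buffer_bytes: float) -> bool:
    """Return True if the data accumulated during the recovery latency
    fits in the receiver buffer (bitrate x latency <= buffer)."""
    bytes_during_latency = (max_bitrate_bps / 8) * (min_latency_ms / 1000)
    return bytes_during_latency <= receiver_buffer_bytes

# Example: a 20 Mb/s stream with the 2,000 ms default minimum latency
# accumulates 5 MB during the recovery window.
print(srt_receiver_buffer_ok(20_000_000, 2000, 6_000_000))  # True: 5 MB <= 6 MB
```

If the check fails, either lower the minimum latency, lower the bitrate, or increase the receiver buffer on the SRT receiver side.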

------
#### [ Zixi push ]

   1. For **Protocol**, choose **Zixi push**. 

   1. For **IP address**, choose the IP address where you want to send the output.

   1. For **Port**, choose the port that you want to use when the content is distributed to this output. For more information about ports, see [Output destinations](destinations.md).

   1. For **Stream ID**, enter the stream ID that is set in the Zixi receiver.
**Important**  
If you keep this field blank, the service uses the output name as the stream ID. Because the stream ID must match the value set in the Zixi receiver, specify a stream ID whenever the receiver's value differs from the output name.

   1. For **Maximum latency**, specify the size of the buffer (delay) that you want the service to maintain. A higher latency value means a longer delay in transmitting the stream, but more room for error correction. A lower latency value means a shorter delay, but less room for error correction. You can choose a value between 0 and 60,000 ms. If you keep this field blank, the service uses the default value of 6,000 ms.

   1. For **Output to VPC**, choose the name of the VPC interface that you want to send your output to.

   1. If you want to encrypt the video as it is sent to this output, do the following:

      1. In the **Encryption** section, choose **Enable**.

      1. For **Encryption type**, choose **Static key**.

      1. For **Role ARN**, specify the ARN of the role that you created when you [set up encryption](encryption-static-key-set-up.md#encryption-static-key-set-up-create-iam-role).

      1. For **Secret ARN**, specify the ARN that AWS Secrets Manager assigned when you [created the secret to store the encryption key](encryption-static-key-set-up.md#encryption-static-key-set-up-store-key).

      1. For **Encryption algorithm**, choose the type of encryption that you want to use to encrypt the source.

------
#### [ CDI ]

   1. For **Protocol**, choose **CDI**. 

   1. For **IP address**, choose the IP address where you want to send the output.

   1. For **Port**, choose the port that you want to use when the content is distributed to this output. For more information about ports, see [Output destinations](destinations.md).

   1. For **VPC interface**, choose the name of the VPC interface that you want to send your output to.

   1. For each media stream that you want to send as part of the output, do the following:

      1. For **Media stream name**, choose the name of the media stream. You can only add the media streams that the source on your flow uses.

      1. For **Encoding name**, confirm the default value, which is pre-selected based on the media stream type.

      1. For **FMT**, specify the format type number (sometimes referred to as *RTP payload type*) of the media stream. This value should be in a format that the receiver recognizes.

------
#### [ ST 2110 JPEG XS ]

   1. For **Protocol**, choose **ST 2110 JPEG XS**. 

   1. For **VPC interface 1**, choose one of the VPC interfaces that you want to send content to and then choose the specific IP address where you want to send the output.

   1. For **VPC interface 2**, choose a second VPC interface that you want to send content to and then choose the specific IP address where you want to send the output. There is no priority between VPC interfaces 1 and 2.

   1. For each media stream that you want to send as part of the output, do the following:

      1. For **Media stream name**, choose the name of the media stream. You can only add the media streams that the source on your flow uses.

      1. For **Encoding name**, choose the format that was used to encode the data.
         + For ancillary data streams, set the encoding name to **smpte291**.
         + For audio streams, set the encoding name to **pcm**.
         + For video, set the encoding name to **jxsv**.

      1. For **Port**, choose the port that you want to use when the content is distributed to this output. For more information about ports, see [Output destinations](destinations.md).

      1. For **Encoder profile**, choose a setting for the compression. This property only applies if the source uses the CDI protocol. 

      1. For **Compression factor**, specify a value that you want the service to use when calculating the compression for the output. Valid values are floating point numbers in the range of 3.0 to 10.0, inclusive. The bitrate of the output is calculated as follows:

         Output bitrate = (1 / compressionFactor) × (source bitrate)

         This property only applies if the source uses the CDI protocol.

   1. Choose **Add output**.
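
As a quick sketch of the compression factor formula above (illustrative only; the function name is an assumption):

```python
def jpeg_xs_output_bitrate(source_bitrate_bps: float, compression_factor: float) -> float:
    """Output bitrate = (1 / compressionFactor) x source bitrate.
    The compression factor must be in the range 3.0-10.0, inclusive."""
    if not 3.0 <= compression_factor <= 10.0:
        raise ValueError("Compression factor must be between 3.0 and 10.0")
    return source_bitrate_bps / compression_factor

# A 1.5 Gb/s source with a compression factor of 5.0 produces a 300 Mb/s output.
print(jpeg_xs_output_bitrate(1_500_000_000, 5.0))  # 300000000.0
```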

------

# Adding an NDI® output to a MediaConnect flow
Adding NDI outputs

This procedure walks you through the process of setting up an NDI® output and configuring how your NDI video streams appear to other devices in your VPC network. After you have the necessary prerequisites in place, you can add an NDI output to your MediaConnect flow, allowing you to distribute your video and audio streams over the NDI protocol within your VPC.

**Note**  
CDI flows don't support NDI outputs.

## Prerequisites


We recommend reviewing the [NDI outputs](outputs-using-ndi.md) documentation to familiarize yourself with this feature before getting started.

Before you can add NDI outputs to a flow, make sure you have the following resources in place:

**Large MediaConnect flow with NDI configuration enabled**  
+ If you haven't created a flow yet, you'll need to [create a transport stream flow](https://docs.aws.amazon.com/mediaconnect/latest/ug/flows-create.html). When you create the flow, you must set the size to large and make sure that NDI support is enabled. 
+ The flow can be in ACTIVE or STANDBY status before you add an NDI output.

**Network infrastructure**  
+ **VPC** - You'll need a Virtual Private Cloud (VPC). For a quick start, you can use the [AWS CloudFormation VPC template](https://docs.aws.amazon.com/vpc/latest/userguide/create-vpc.html) to automatically create a VPC with public and private subnets. For more information about VPCs, see the [Amazon VPC User Guide](https://docs.aws.amazon.com/vpc/latest/userguide/). 
+ **Discovery servers** - NDI discovery servers must already be provisioned in your VPC network. MediaConnect connects to these servers, but it doesn't create them for you. AWS provides guidance for automatically deploying NDI discovery servers using AWS CloudFormation, including best practices for installation and configuration. For instructions, see [Setting Up NDI Discovery Servers for Broadcast Workflows](https://aws.amazon.com/solutions/guidance/programmatic-deployment-of-ndi-discovery-servers-for-broadcast-workflows-on-aws/).
+ **Security groups** - To enable NDI functionality, we recommend that you configure your security groups with a self-referencing ingress rule and egress rule. You can then attach this security group to the EC2 instances where your NDI servers are running within the VPC. This approach allows all of the necessary NDI communication between components in your VPC. For guidance on setting up self-referencing security group rules, see [Security Group Referencing](https://docs.aws.amazon.com/vpc/latest/userguide/security-group-rules.html#security-group-referencing) in the Amazon VPC User Guide.
+ In the following procedure, you’ll need to know your NDI server private IP address and your VPC subnet ID.
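
As a sketch of the recommended security group setup, a self-referencing rule might look like the following AWS CloudFormation fragment. All resource names are illustrative, and `MyVpc` is a placeholder for your own VPC resource:

```
Resources:
  NdiSecurityGroup:
    Type: AWS::EC2::SecurityGroup
    Properties:
      GroupDescription: Allow NDI traffic between group members
      VpcId: !Ref MyVpc        # placeholder; reference your own VPC
  # Defined as a separate resource so the group can reference itself
  NdiSelfIngress:
    Type: AWS::EC2::SecurityGroupIngress
    Properties:
      GroupId: !Ref NdiSecurityGroup
      IpProtocol: "-1"         # all protocols and ports
      SourceSecurityGroupId: !Ref NdiSecurityGroup
```

The default security group egress rule already allows all outbound traffic; if you have restricted egress, add a matching self-referencing egress rule as well.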

## Procedure


Follow these steps to set up an NDI output and configure how your NDI video and audio streams appear to other devices in your VPC network.

**To add an NDI output to a flow (console)**

1. Open the MediaConnect console at [https://console.aws.amazon.com/mediaconnect/](https://console.aws.amazon.com/mediaconnect/).

1. On the **Flows** page, choose the name of the flow that you want to add an output to.

1. On the flow details page under **Flow size**, make sure the size is set to **Large**. 

1. On the flow details page under **NDI configuration**, configure your settings as follows:

   1. Set **Flow NDI support** to **Enabled** if it’s not already.

   1. (Optional) Enter an **NDI machine name**.
   + This name is used as a prefix to help you identify the NDI sources that your flow creates. For example, if you enter **MACHINENAME**, your NDI sources will appear as **MACHINENAME** `(ProgramName)`.
   + If you leave this blank, MediaConnect generates a unique 12-character ID as the prefix. This ID is derived from the flow's Amazon Resource Name (ARN), so the machine name references the flow resource.
**Tip**  
Thoughtful naming is especially important when you have multiple flows creating NDI sources. For example, a production environment with 100 NDI sources would benefit from clear, descriptive machine name prefixes like `STUDIO-A`, `STUDIO-B`, `NEWSROOM`, and so on. 

   1. Add up to three **NDI discovery servers**. For each server, provide the following information:
   + Enter the private IP address that's resolvable within the VPC subnet where the VPC adapter is pointed. This should be a private IP, not a public IP address.
   + Select the VPC interface adapter to control network access.
   + (Optional) Specify a port number. If you leave this blank, MediaConnect uses the NDI discovery server default of TCP-5959.
**Note**  
DNS names aren't currently supported for discovery servers.
**Tip**  
You can add up to three discovery servers. Having multiple discovery servers improves reliability and helps ensure your NDI sources are discoverable across your network.

1. Choose the **Outputs** tab.

1. Choose **Add output**.

1. For **Name**, specify a name for your output. This value is an identifier that is visible only on the AWS Elemental MediaConnect console and is not visible to the end user.

1. For **Output type**, choose **NDI output**.

1. For **NDI codec**, choose **SpeedHQ**.

1. For **NDI SpeedHQ quality**, enter a value between 100 and 200.
   + This setting adjusts the NDI encoder's target bitrate as a percentage of the default. 
   + The default value is 100, which uses the standard NDI bitrate. Values up to 200 increase the target bitrate proportionally (for example, 200 doubles it).
**Note**  
Some kinds of content (such as high-motion sports) will benefit from a higher quality setting. However, keep in mind that using higher quality settings limits the total number of outputs that a flow can generate, because the flow's total output capacity is limited (up to 2.5 Gbps).

1. (Optional) Enter an **NDI program name**.
   + This name is used as a suffix to help you identify the NDI sources that your flow creates. For example, if you enter **MyNDIProgram**, your NDI sources will appear as `MACHINENAME` **(MyNDIProgram)**.
   + If you leave this blank, MediaConnect uses the name of the output.
**Tip**  
Thoughtful naming is especially important when you have multiple flows creating NDI sources. For example, in a production environment, you might use names like `MainCam`, `BackupCam`, `GraphicsOutput`, and so on to clearly identify different video feeds from the same machine.

1. Choose **Add output**.
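
The naming behavior described in this procedure can be sketched as follows. This illustrates the documented fallbacks only; the function name and the way the 12-character ID is derived here are assumptions, not MediaConnect's implementation:

```python
from typing import Optional

def ndi_source_name(machine_name: Optional[str], program_name: Optional[str],
                    output_name: str, flow_arn_id: str) -> str:
    """Compose the displayed NDI source name as '<machine name> (<program name>)'.
    flow_arn_id is a stand-in for the value the ARN-derived ID comes from."""
    prefix = machine_name or flow_arn_id[:12]   # blank machine name -> 12-character ID
    suffix = program_name or output_name        # blank program name -> output name
    return f"{prefix} ({suffix})"

print(ndi_source_name("STUDIO-A", "MainCam", "Output1", "1-23aBC45dEF67hiJ8"))
# STUDIO-A (MainCam)
```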

## Next steps


After you [start your flow](flows-start.md), you should be able to see the MediaConnect NDI flow output as an available NDI source in your discovery server. You can then subscribe to it to receive NDI traffic. For more information, see the [NDI documentation](https://docs.ndi.video/all/developing-with-ndi/introduction).

# Viewing the list of outputs for a MediaConnect flow
Viewing outputs

You can view a list of a flow's outputs, along with the setup that is associated with each output. This list includes outputs that you added, as well as outputs that AWS Elemental MediaConnect added when subscribers created flows based on entitlements that you granted.

**To view a list of outputs on an existing flow (console)**

1. Open the MediaConnect console at [https://console.aws.amazon.com/mediaconnect/](https://console.aws.amazon.com/mediaconnect/).

1. On the **Flows** page, choose the name of the flow that you want to view.

   The details page for that flow appears.

1. Choose the **Outputs** tab.

   A list of outputs for that flow appears.

**To view a list of outputs on an existing flow (AWS CLI)**
+ In the AWS CLI, use the `describe-flow` command:

  ```
  aws mediaconnect describe-flow --flow-arn "arn:aws:mediaconnect:us-east-1:111122223333:flow:1-23aBC45dEF67hiJ8-12AbC34DE5fG:BasketballGame" --region us-east-1 --profile PMprofile
  ```

  The return value shows the details of the entire flow, including all the outputs. The following example shows the return value:

  ```
  {
    "Flow": {
      "AvailabilityZone": "us-east-1d",
      "Entitlements": [],
      "FlowArn": "arn:aws:mediaconnect:us-east-1:111122223333:flow:1-23aBC45dEF67hiJ8-12AbC34DE5fG:BasketballGame",
      "Name": "BasketballGame",
      "Outputs": [
        {
          "Address": "192.0.2.12",
          "Description": "RTP-FEC Output",
          "Name": "NYCOutput",
          "OutputArn": "arn:aws:mediaconnect:us-east-1:111122223333:output:2-3aBC45dEF67hiJ89-c34de5fG678h:NYCOutput",
          "Port": 5020,
          "Protocol": "rtp-fec"
        },
        {
          "Address": "198.51.100.8",
          "Description": "RTP Output",
          "Name": "DCOutput",
          "OutputArn": "arn:aws:mediaconnect:us-east-1:111122223333:output:2-987655dEF67hiJ89-c34de5fG678h:DCOutput",
          "Port": 5110,
          "Protocol": "rtp"
        }
      ],
      "Source": {
      "IngestIp": "198.51.100.21",
        "IngestPort": 5010,
        "Name": "BasketballGameSource",
        "Protocol": "rtp-fec",
        "SourceArn": "arn:aws:mediaconnect:us-east-1:111122223333:source:3-4aBC56dEF78hiJ90-4de5fG6Hi78Jk:BasketballGameSource",
        "AllowlistCidr": "10.24.34.0/23"
      },
      "Status": "STANDBY"
    }
  }
  ```
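
If you script against `describe-flow`, a short sketch like the following can summarize the outputs. The JSON is abbreviated from the example response above, and the parsing code is illustrative only:

```python
import json

# Abbreviated describe-flow response, matching the example above.
response = json.loads("""
{
  "Flow": {
    "Name": "BasketballGame",
    "Outputs": [
      {"Name": "NYCOutput", "Address": "192.0.2.12", "Port": 5020, "Protocol": "rtp-fec"},
      {"Name": "DCOutput", "Address": "198.51.100.8", "Port": 5110, "Protocol": "rtp"}
    ]
  }
}
""")

# Build a one-line summary per output.
summaries = [
    f"{out['Name']}: {out['Protocol']} -> {out['Address']}:{out['Port']}"
    for out in response["Flow"]["Outputs"]
]
for line in summaries:
    print(line)
# NYCOutput: rtp-fec -> 192.0.2.12:5020
# DCOutput: rtp -> 198.51.100.8:5110
```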

# Updating outputs on a MediaConnect flow
Updating outputs

You can update outputs on a flow, even when the flow is active.

**Important**  
For NDI® outputs, you can update the machine name, program name, and discovery server address. However, avoid changing the machine name or program name for an active NDI output, because downstream receivers rely on these details to maintain their connections. If you change the machine or program name, downstream receivers must re-establish their connections using the new names.

**To update an output on a flow (console)**

1. Open the MediaConnect console at [https://console.aws.amazon.com/mediaconnect/](https://console.aws.amazon.com/mediaconnect/).

1. On the **Flows** page, choose the name of the flow that is associated with the output that you want to update.

1. Choose the **Outputs** tab.

   A list of outputs for that flow appears.

1. Choose the output that you want to update.

1. Choose **Update**.

1. Make the appropriate changes, and then choose **Save**.

**To update a flow output (AWS CLI)**
+ In the AWS CLI, use the `update-flow-output` command:

  ```
  aws mediaconnect update-flow-output --flow-arn "arn:aws:mediaconnect:us-east-1:111122223333:flow:1-23aBC45dEF67hiJ8-12AbC34DE5fG:BasketballGame" --output-arn "arn:aws:mediaconnect:us-east-1:111122223333:output:2-3aBC45dEF67hiJ89-c34de5fG678h:NYCfeed" --port 5040 --region us-east-1 --profile PMprofile
  ```

  The following example shows the return value:

  ```
  {
    "FlowArn": "arn:aws:mediaconnect:us-east-1:111122223333:flow:1-23aBC45dEF67hiJ8-12AbC34DE5fG:BasketballGame",
    "Output": {
      "Address": "192.0.2.12",
      "Encryption": {
        "Algorithm": "aes256",
        "KeyType": "static-key",
        "RoleArn": "arn:aws:iam::111122223333:role/AllowMediaConnect",
        "SecretArn": "arn:aws:secretsmanager:us-west-2:111122223333:secret:SECRETID"
      },
      "Name": "Output1",
      "OutputArn": "arn:aws:mediaconnect:us-east-1:111122223333:output:2-3aBC45dEF67hiJ89-c34de5fG678h:Output1",
      "Port": 5040,
      "Protocol": "rtp-fec"
    }
  }
  ```

# Managing tags on a MediaConnect output
Managing tags on an output

You can use tags to help you track the billing and organization for your AWS Elemental MediaConnect outputs. These are the same tags that AWS Billing and Cost Management provides for organizing your AWS bill. For more information about using tags for cost allocation, see [Use Cost Allocation Tags for Custom Billing Reports](https://docs.aws.amazon.com/awsaccountbilling/latest/aboutv2/allocation.html) in the *AWS Billing User Guide*. 

**Topics**
+ [

# Adding tags on a MediaConnect output
](outputs-manage-tags-add.md)
+ [

# Editing tags on a MediaConnect output
](outputs-manage-tags-edit.md)
+ [

# Removing tags from a MediaConnect output
](outputs-manage-tags-remove.md)

# Adding tags on a MediaConnect output
Adding tags on an output

Use tags to help you track the billing and organization for your AWS Elemental MediaConnect outputs. For more information about using tags for cost allocation, see [Use Cost Allocation Tags for Custom Billing Reports](https://docs.aws.amazon.com/awsaccountbilling/latest/aboutv2/allocation.html) in the *AWS Billing User Guide*.

**To add tags to an output (console)**

1. Open the MediaConnect console at [https://console.aws.amazon.com/mediaconnect/](https://console.aws.amazon.com/mediaconnect/).

1. On the **Flows** page, choose the name of the flow that is associated with the output that you want to add tags to.

1. Choose the **Outputs** tab.

   A list of outputs for that flow appears.

1. Choose the output that you want to add tags to.

1. Choose **Manage tags**.

1. Choose **Manage tags** again, and then choose **Add tag**.

1. For each tag that you want to add, do the following:

   1. Enter a key and a value. For example, your key can be **sports** and your value can be **golf**. 

   1. Choose **Add tag**.

1. Choose **Update**.

# Editing tags on a MediaConnect output
Editing tags on an output

Use tags to help you track the billing and organization for your AWS Elemental MediaConnect outputs. For more information about using tags for cost allocation, see [Use Cost Allocation Tags for Custom Billing Reports](https://docs.aws.amazon.com/awsaccountbilling/latest/aboutv2/allocation.html) in the *AWS Billing User Guide*.

**To edit tags on an output (console)**

1. Open the MediaConnect console at [https://console.aws.amazon.com/mediaconnect/](https://console.aws.amazon.com/mediaconnect/).

1. On the **Flows** page, choose the name of the flow that is associated with the output that you want to edit tags for.

1. Choose the **Outputs** tab.

   A list of outputs for that flow appears.

1. Choose the output that you want to edit tags for.

1. In the **Details** section, choose **Manage tags**.

1. Choose **Manage tags** again.

1. Update the tags, as needed.

1. Choose **Update**.

# Removing tags from a MediaConnect output
Removing tags from an output

You can remove a tag from an output if you no longer want to use it to track the billing and organization for it.

**To remove tags from an output (console)**

1. Open the MediaConnect console at [https://console.aws.amazon.com/mediaconnect/](https://console.aws.amazon.com/mediaconnect/).

1. On the **Flows** page, choose the name of the flow that is associated with the output that you want to remove tags from.

1. Choose the **Outputs** tab.

   A list of outputs for that flow appears.

1. Choose the output that you want to remove tags from.

1. In the **Details** section, choose **Manage tags**.

1. Choose **Manage tags** again.

1. Choose **Remove tag** next to each tag that you want to delete.

1. Choose **Update**.

# Disabling or removing outputs from a MediaConnect flow
Disabling or removing outputs

You can disable or remove outputs that you added to the flow. If AWS Elemental MediaConnect generated the output as the result of an entitlement, you must [revoke the entitlement](entitlements-revoke.md) instead.

Disabling an output stops the streaming of content to the output destination, but the output remains attached to the flow. A disabled output does not incur data transfer costs.

**To disable an output (console)**

1. Open the MediaConnect console at [https://console.aws.amazon.com/mediaconnect/](https://console.aws.amazon.com/mediaconnect/).

1. On the **Flows** page, choose the name of the flow that is associated with the output that you want to disable.

   The details page for that flow appears. 

1. Choose the **Outputs** tab.

1. Choose the output, and then choose **Update**.

1. In the **Update output** window, use the **Output status** toggle to disable or enable the selected output.

1. Choose **Save** to save your changes.

**To disable an output (AWS CLI)**
+ In the AWS CLI, use the `update-flow-output` command with the `--output-status DISABLED` option to disable the output. Alternately, you can use `--output-status ENABLED` to enable a disabled output.

  ```
  aws mediaconnect update-flow-output --flow-arn "arn:aws:mediaconnect:us-east-1:111122223333:flow:1-23aBC45dEF67hiJ8-12AbC34DE5fG:BasketballGame" --output-arn "arn:aws:mediaconnect:us-east-1:111122223333:output:2-3aBC45dEF67hiJ89-c34de5fG678h:Output1" --output-status DISABLED
  ```

  The following example shows the return value:

  ```
  {
      "FlowArn": "arn:aws:mediaconnect:us-east-1:111122223333:flow:1-23aBC45dEF67hiJ8-12AbC34DE5fG:BasketballGame",
      "Output": {
          "Destination": "192.0.2.12",
          "Name": "NYCOutput",
          "OutputArn": "arn:aws:mediaconnect:us-east-1:111122223333:output:2-3aBC45dEF67hiJ89-c34de5fG678h:NYCOutput",
          "OutputStatus": "DISABLED",
          "Port": 5020,
          "Transport": {
              "MinLatency": 1000,
              "Protocol": "rtp-fec"
          }
      }
  }
  ```

**To remove an output from a flow (console)**

1. Open the MediaConnect console at [https://console.aws.amazon.com/mediaconnect/](https://console.aws.amazon.com/mediaconnect/).

1. On the **Flows** page, choose the name of the flow that is associated with the output that you want to remove.

   The details page for that flow appears. 

1. Choose the **Outputs** tab.

1. Choose the output, and then choose **Remove**.

**To remove an output from a flow (AWS CLI)**
+ In the AWS CLI, use the `remove-flow-output` command:

  ```
  aws mediaconnect remove-flow-output --flow-arn "arn:aws:mediaconnect:us-east-1:111122223333:flow:1-23aBC45dEF67hiJ8-12AbC34DE5fG:BasketballGame" --output-arn "arn:aws:mediaconnect:us-east-1:111122223333:output:2-3aBC45dEF67hiJ89-c34de5fG678h:Output1" --region us-west-2
  ```

  The following example shows the return value:

  ```
  {
      "FlowArn": "arn:aws:mediaconnect:us-east-1:111122223333:flow:1-23aBC45dEF67hiJ8-12AbC34DE5fG:BasketballGame",
      "OutputArn": "arn:aws:mediaconnect:us-east-1:111122223333:output:2-3aBC45dEF67hiJ89-c34de5fG678h:Output1"
  }
  ```

# Output destinations
Output destinations

Each output on a flow must be sent to a different destination. The parameters that define the destination depend on the protocol, but every protocol uses a compound identifier for the destination. For example, multiple outputs can point to the same destination IP address, as long as none of their ports overlap. Likewise, multiple outputs can point to the same stream ID as long as their remote IDs are different. The following table lists how each protocol defines the destination.

**Note**  
Some protocols require additional ports for error correction. For outputs that use these protocols, AWS Elemental MediaConnect automatically reserves the additional ports. The protocol defines specifically which ports must be reserved. For example, some protocols require port + 2 and port + 4 for error correction. If you specify port 5000 for the output, the service assigns ports 5000, 5002, and 5004.


****  

| Protocol | Destination definition | Ports required | 
| --- | --- | --- | 
| CDI | Ports for each media stream |  The ports that you specify for each media stream. These are the only ports needed for the output.  | 
| NDI® |  VPC and discovery server configuration  |  The ports that you specify for each media stream. If you don't specify a custom port, MediaConnect uses the default NDI discovery protocol (TCP-5959) to announce NDI sources on your network.  | 
| RIST | IP address, port, and port + 1 |  The port that you specify, plus one additional port. The service automatically reserves the port that is 1 greater than the port that you specified. For example, if you specify port 3000 for this output, the service also reserves port 3001.  | 
| RTP | IP address and port | The port that you specify. This is the only port needed for the output. | 
| RTP-FEC | IP address, port, port + 2, and port + 4 |  The port that you specify, plus two additional ports. The service automatically reserves the ports that are 2 and 4 greater than the port that you specified. For example, if you specify port 2000 for this output, the service also reserves ports 2002 and 2004 for error correction.  | 
| SRT listener | CIDR allow list and port | The port that you specify. This is the only port needed for the output. | 
| SRT caller | IP address and port | The port that you specify. This is the only port needed for the output. | 
| ST 2110 JPEG XS | Ports for each media stream |  The ports that you specify for each media stream. These are the only ports needed for the output.  | 
| Zixi pull | Stream ID, remote ID, and CIDR allow list | The service automatically uses port 2077 for these outputs. | 
| Zixi push | IP address, stream ID, and port | The port that you specify is the only port needed for the output. | 
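
The reservation rules in the note and table above can be expressed as a small lookup. This is an illustrative sketch, not service code; the map covers only the protocols that reserve ports relative to the one you specify:

```python
# Port offsets that MediaConnect reserves relative to the specified port.
PORT_OFFSETS = {
    "rtp": [0],            # only the port you specify
    "rtp-fec": [0, 2, 4],  # +2 and +4 reserved for error correction
    "rist": [0, 1],        # +1 reserved automatically
    "srt-listener": [0],
    "srt-caller": [0],
    "zixi-push": [0],
}

def reserved_ports(protocol: str, base_port: int) -> list:
    """Return every port the service reserves for the output."""
    return [base_port + offset for offset in PORT_OFFSETS[protocol]]

print(reserved_ports("rtp-fec", 2000))  # [2000, 2002, 2004]
print(reserved_ports("rist", 3000))     # [3000, 3001]
```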

# Determining an output's IP address
Determining an output's IP address

For flows that use listener protocols, you can locate an output's listener IP address. You can also view the current peer IP address for your flow outputs.

When you're working with outputs on standard flows, there are two important IP addresses to understand:
+ **Output IP address** - This is the MediaConnect endpoint address where your downstream receivers connect to. You’ll need this address when you configure your receivers to connect to your flow and start receiving its content.
+ **Output peer IP address** - This is the IP address of the device that’s currently receiving content from your output. This address is useful for troubleshooting connectivity issues and monitoring which devices are actively connected to your output.

The following sections explain how to find the IP address and the peer IP address for your flow outputs.

## Finding an output’s IP address


You can view the IP address for each of your flow outputs in the MediaConnect console, or by using the [DescribeFlow](https://docs.aws.amazon.com/mediaconnect/latest/api/API_DescribeFlow.html) API operation.

**To determine an output's IP address**

1. On the **Flows** page, choose the name of the flow that you want to view.

1. For specific instructions based on how content is sent to your output, choose one of the following tabs:

------
#### [ Public internet ]

   1. In the **Details** section, note the **Public Outbound IP address**. This is the IP address that the receiver needs.

------
#### [ Private internet ]

   1. Choose the **Outputs** tab, and then find the output that you want to view.

   1. Under **Listener address** for that output, note the IP address. This is the IP address that the receiver needs.

------

## Finding an output’s peer IP address


You can view the current peer IP address for each of your flow outputs in the MediaConnect console, or by using the [DescribeFlow](https://docs.aws.amazon.com/mediaconnect/latest/api/API_DescribeFlow.html) API operation.

**To determine an output’s peer IP address**

1. On the **Flows** page, choose the name of the flow that you want to view.

1. Choose the **Outputs** tab.

1. Select the output that you want to view, and then choose **Details**.

1. Under **Peer IP Address**, note the peer IP address.

**Note**  
For certain types of outputs, the peer IP address matches the IP address that you configured during setup. These include:  
+ SRT Caller outputs  
+ RTP/FEC outputs  
+ Zixi Push outputs

For these output types, the peer IP is effectively static because you pre-configure the destination IP address. However, MediaConnect still reports these addresses for consistency and to provide a complete picture of your flow's configuration.  
For other protocols (like RIST outputs and SRT Listener outputs), the peer IP address is dynamic and shows the current address of the device that's receiving traffic from your output.

### Important information about peer IP addresses


**Peer IP display and updates**
+ For troubleshooting purposes, MediaConnect shows the latest IP address information in near real-time. 
+ Although most updates happen quickly, it might take up to 20 seconds for peer IP address changes to be reflected in the console and API responses.
+ Only the current peer IP address is displayed. Historical records aren't currently available.
+  The peer IP address might not be visible for flows that haven't been started yet, or flows that were started before May 2025. In these cases, you might need to restart your flow to see the peer IP information.

**Supported protocols and output types**
+ Peer IP addresses are shown for most protocols, including:
  + Pre-configured connections (like SRT Caller or Zixi Push outputs)
  + Dynamic connections (like RTP sources or SRT Listener) 
+ Peer IP addresses aren't available for: 
  + Entitlements
  + Managed (MediaLive) outputs
  + CDI/ST2110 outputs
  + NDI outputs