

# Networking
<a name="networking"></a>

 An Outpost deployment depends on a resilient connection to its anchor AZ for management, monitoring, and service operations to function properly. You should provision your on-premises network to provide redundant network connections for each Outpost rack and reliable connectivity back to the anchor points in the AWS cloud. Also consider network paths between the application workloads running on the Outpost and the other on-premises and cloud systems they communicate with – how will you route this traffic in your network? 

**Topics**
+ [Network attachment](network-attachment.md)
+ [Anchor connectivity](anchor-connectivity.md)
+ [Application/workload routing](applicationworkload-routing.md)

# Network attachment
<a name="network-attachment"></a>

Each AWS Outposts rack is configured with redundant top-of-rack switches called Outpost Networking Devices (ONDs). The compute and storage servers in each rack connect to both ONDs. You should connect each OND to a separate switch called a Customer Networking Device (CND) in your data center to provide diverse physical and logical paths for each Outpost rack. ONDs connect to your CNDs with one or more physical connections using fiber optic cables and optical transceivers. The [physical connections](https://docs.aws.amazon.com/outposts/latest/userguide/local-network-connectivity.html#physical-connectivity) are configured in logical [link aggregation group (LAG) links](https://docs.aws.amazon.com/outposts/latest/userguide/local-network-connectivity.html#link-aggregation).

![Diagram showing a multi-rack Outpost with redundant network attachments](http://docs.aws.amazon.com/whitepapers/latest/aws-outposts-high-availability-design/images/multi-rack-outpost.png)


The OND to CND links are always configured in a LAG, even if the physical connection is a single fiber optic cable. Configuring the links as LAG groups allows you to increase the link bandwidth by adding physical connections to the logical group. The LAG links are configured as IEEE 802.1Q Ethernet trunks to enable segregated networking between the Outpost and the on-premises network.

 Every Outpost has at least two logically segregated networks that need to communicate with or across the customer network: 
+  **Service link network** – allocates the service link IP addresses to the Outpost servers and facilitates communication with the on-premises network so that the servers can connect back to the Outpost anchor points in the Region. In a multi-rack deployment that forms a single logical Outpost, you must assign a separate /26 service link CIDR for each rack.
+  **Local Gateway network** – enables communication between the VPC subnets on the Outpost and the on-premises network via the Outpost Local Gateway (LGW). 

These segregated networks attach to the on-premises network by a set of [point-to-point IP connections](https://docs.aws.amazon.com/outposts/latest/userguide/local-network-connectivity.html#network-layer-connectivity) over the LAG links. Each OND to CND LAG link is configured with VLAN IDs, point-to-point (/30 or /31) IP subnets, and eBGP peering for each segregated network (service link and LGW). You should consider the LAG links, with their point-to-point VLANs and subnets, as layer-2 segmented, routed layer-3 connections. The routed IP connections provide redundant logical paths that facilitate communication between the segregated networks on the Outpost and the on-premises network.
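The addressing plan described above can be sketched with Python's standard `ipaddress` module. The parent blocks below are hypothetical placeholders; substitute ranges from your own addressing plan.

```python
import ipaddress

# Hypothetical parent blocks -- substitute your own on-premises ranges.
SERVICE_LINK_PARENT = ipaddress.ip_network("10.255.0.0/24")   # assumption
P2P_PARENT = ipaddress.ip_network("192.168.255.0/28")         # assumption

def service_link_cidrs(parent, rack_count):
    """Carve one /26 service link CIDR per Outpost rack from a parent block."""
    subnets = list(parent.subnets(new_prefix=26))
    if rack_count > len(subnets):
        raise ValueError("parent block too small for the rack count")
    return subnets[:rack_count]

def point_to_point_subnets(parent, link_count):
    """Carve one /31 point-to-point subnet per OND-to-CND logical connection."""
    subnets = list(parent.subnets(new_prefix=31))
    if link_count > len(subnets):
        raise ValueError("parent block too small for the link count")
    return subnets[:link_count]

# Example: a two-rack Outpost with two ONDs per rack (four OND-to-CND links).
racks = service_link_cidrs(SERVICE_LINK_PARENT, 2)
links = point_to_point_subnets(P2P_PARENT, 4)
print([str(n) for n in racks])  # ['10.255.0.0/26', '10.255.0.64/26']
print([str(n) for n in links])  # ['192.168.255.0/31', ..., '192.168.255.6/31']
```

Each /31 (or /30) subnet pairs one OND interface with one CND interface for the eBGP session on that VLAN.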

![Diagram showing service link peering](http://docs.aws.amazon.com/whitepapers/latest/aws-outposts-high-availability-design/images/service-link-peering.png)


![Diagram showing Local Gateway peering](http://docs.aws.amazon.com/whitepapers/latest/aws-outposts-high-availability-design/images/page-20-local-gateway-peering.png)


You should terminate the layer-2 LAG links (and their VLANs) on the directly attached CND switches and configure the IP interfaces and BGP peering on the CND switches. You should not bridge the LAG VLANs between your data center switches. For more information, see [Network layer connectivity](https://docs.aws.amazon.com/outposts/latest/userguide/local-network-connectivity.html#network-layer-connectivity) in the *AWS Outposts User Guide*.

 Inside a logical multi-rack Outpost, the ONDs are redundantly interconnected to provide highly available network connectivity between the racks and the workloads running on the servers. AWS is responsible for network availability within the Outpost. 

## Recommended practices for highly available network attachment without ACE
<a name="recommended-practices-for-highly-available-network-attachment-no-ace"></a>
+  Connect each Outpost Networking Device (OND) in an Outpost rack to a separate Customer Networking Device (CND) in the data center. 
+  Terminate the layer-2 links, VLANs, layer-3 IP subnets, and BGP peering on the directly attached Customer Networking Device (CND) switches. Do not bridge the OND to CND VLANs between the CNDs or across the on-premises network. 
+  Add links to the Link Aggregation Groups (LAGs) to increase the available bandwidth between the Outpost and the data center. Do not rely on the aggregate bandwidth of the diverse paths through both ONDs. 
+  Use the diverse paths through the redundant ONDs to provide resilient connectivity between the Outpost networks and the on-premises network. 
+ To achieve optimal redundancy and allow for non-disruptive OND maintenance, we recommend that customers configure BGP advertisements and policies as follows:
  + Customer network equipment should accept BGP advertisements from the Outpost without modifying BGP attributes, and should enable BGP multipath/load balancing to achieve optimal inbound traffic flows (from the customer network toward the Outpost). AWS uses AS-path prepending on Outpost BGP prefixes to shift traffic away from a particular OND/uplink when maintenance is required, so the customer network should prefer routes from the Outpost with AS-path length 1 over routes with AS-path length 4 (that is, it must react to AS-path prepending).
  + The customer network should advertise equal BGP prefixes with the same attributes toward all ONDs in the Outpost. By default, the Outpost network load balances outbound traffic (toward the customer network) across all uplinks. AWS uses routing policies on the Outpost side to shift traffic away from a particular OND when maintenance is required; equal BGP prefixes from the customer side on all ONDs are required to perform this traffic shift and complete maintenance in a non-disruptive way. When maintenance is required on the customer's network, we recommend using AS-path prepending to temporarily shift traffic away from a particular uplink or device.
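The effect of AS-path prepending on path selection can be illustrated with a minimal best-path model. This is not a BGP implementation; it compares AS-path length only (real BGP evaluates many more attributes), and the ASN and next-hop names are made up.

```python
from collections import namedtuple

# Minimal model of BGP best-path selection by AS-path length only.
Route = namedtuple("Route", ["next_hop", "as_path"])

def best_paths(routes):
    """Return all routes tied for the shortest AS-path (multipath candidates)."""
    shortest = min(len(r.as_path) for r in routes)
    return [r for r in routes if len(r.as_path) == shortest]

# Steady state: both ONDs advertise the Outpost prefix with AS-path length 1,
# so a multipath-enabled customer network load balances across both.
steady = [Route("ond-1", [65010]), Route("ond-2", [65010])]
print({r.next_hop for r in best_paths(steady)})   # {'ond-1', 'ond-2'}

# Maintenance: AWS prepends on ond-1 (AS-path length 4), so all traffic
# shifts to ond-2 -- provided the customer network reacts to AS-path length.
maintenance = [Route("ond-1", [65010] * 4), Route("ond-2", [65010])]
print([r.next_hop for r in best_paths(maintenance)])  # ['ond-2']
```

A customer network that ignores AS-path length (for example, one that pins a static preference per peer) would defeat this maintenance mechanism, which is why the recommendation above asks you not to modify the received attributes.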

## Recommended practices for highly available network attachment with ACE
<a name="recommended-practices-for-highly-available-network-attachment-with-ace"></a>

For a multi-rack deployment with four or more compute racks, you must use the Aggregation, Core, Edge (ACE) rack, which acts as a network aggregation point and reduces the number of fiber links to your on-premises networking devices. The ACE rack provides connectivity to the ONDs in each Outposts rack, so AWS owns the VLAN interface allocation and configuration between the ONDs and the ACE networking devices.

The segregated service link and local gateway networks are still required whether or not an ACE rack is used; each requires its own VLAN, point-to-point (/30 or /31) IP subnet, and eBGP peering configuration. Your deployment should follow one of two architectures:

**Topics**
+ [Two customer networking devices](#two-customer-devices)
+ [Four customer networking devices](#four-customer-devices)

### Two customer networking devices
<a name="two-customer-devices"></a>

![Diagram showing two customer networking devices](http://docs.aws.amazon.com/whitepapers/latest/aws-outposts-high-availability-design/images/page-22-two-customer-networking-devices.png)

+ With this architecture, you use two customer networking devices (CNDs) to interconnect the ACE networking devices, providing redundancy.
+ Configure each set of physical connections as a LAG (which also lets you increase the available bandwidth between the Outpost and the data center), even if it consists of a single physical port. Each LAG carries two network segments, with two point-to-point VLANs (/30 or /31 subnets) and eBGP configurations between the ACE devices and the CNDs.
+ In steady state, traffic to and from the customer network is load balanced at the ACE layer using equal-cost multipath (ECMP), giving roughly 25% traffic distribution across each ACE-to-customer link. To enable this behavior, the eBGP peerings between the ACEs and CNDs must have BGP multipath/load balancing enabled, and the customer prefixes must be announced with the same BGP attributes on all four eBGP peerings.
+ To achieve optimal redundancy and allow for non-disruptive OND maintenance, we recommend the following:
  + Customer networking devices should advertise equal BGP prefixes with the same attributes toward all ONDs in the Outpost.
  + Customer networking devices should accept BGP advertisements from the Outpost without modifying BGP attributes, and should enable BGP multipath/load balancing.
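The 25% steady-state split follows from per-flow hash ECMP across the four equal-cost peerings. A minimal sketch, with hypothetical link names and MD5 standing in for a router's hash function:

```python
import hashlib
from collections import Counter

# Four equal-cost next hops: two ACE devices x two CNDs (hypothetical names).
NEXT_HOPS = ["ace1-cnd1", "ace1-cnd2", "ace2-cnd1", "ace2-cnd2"]

def ecmp_next_hop(flow, next_hops=NEXT_HOPS):
    """Pick a next hop by hashing the flow 5-tuple (stable per flow, so
    packets of one flow always take the same link)."""
    digest = hashlib.md5(repr(flow).encode()).digest()
    return next_hops[digest[0] % len(next_hops)]

# Across many flows, the distribution approaches 25% per link.
flows = [("10.0.1.5", "203.0.113.7", 6, 40000 + i, 443) for i in range(10000)]
counts = Counter(ecmp_next_hop(f) for f in flows)
for hop, n in sorted(counts.items()):
    print(hop, round(100 * n / len(flows), 1), "%")
```

Because the hash is per flow, a single elephant flow still uses one link; the 25% figure is an aggregate over many flows.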

### Four customer networking devices
<a name="four-customer-devices"></a>

![Diagram showing four customer networking devices](http://docs.aws.amazon.com/whitepapers/latest/aws-outposts-high-availability-design/images/page-23-four-customer-networking-devices.png)


With this architecture, you use four customer networking devices (CNDs) to interconnect the ACE networking devices. This provides additional redundancy, and the same networking logic (VLANs, eBGP, and ECMP) that applies to the two-CND architecture applies here.

# Anchor connectivity
<a name="anchor-connectivity"></a>

An [Outpost service link](https://docs.aws.amazon.com/outposts/latest/userguide/region-connectivity.html) connects to either public or private anchors (not both) in a specific Availability Zone (AZ) in the Outpost’s parent Region. Outpost servers initiate outbound service link VPN connections from their service link IP addresses to the anchor points in the anchor AZ. These connections use UDP and TCP port 443. AWS is responsible for the availability of the anchor points in the Region.

 You must ensure the Outpost service link IP addresses can connect through your network to the anchor points in the anchor AZ. The service link IP addresses do not need to communicate with other hosts on your on-premises network. 

Public anchor points reside in the Region’s [public IP ranges](https://docs.aws.amazon.com/general/latest/gr/aws-ip-ranges.html) (in the EC2 service CIDR blocks) and may be accessed via the internet or [AWS Direct Connect](https://aws.amazon.com/directconnect/) (DX) public virtual interfaces (VIFs). The use of public anchor points allows for more flexible path selection, as service link traffic may be routed over any available path that can successfully reach the anchor points on the public internet.

Private anchor points allow you to use your own IP address ranges for anchor connectivity. Private anchor points are created in a [private subnet within a dedicated VPC](https://docs.aws.amazon.com/outposts/latest/userguide/region-connectivity.html#private-connectivity) using customer-assigned IP addresses. The VPC is created in the AWS account that owns the Outpost resource, and you are responsible for ensuring the VPC is available and properly configured. Use a service control policy (SCP) in AWS Organizations to prevent users from deleting that VPC. Private anchor points must be accessed using [Direct Connect private VIFs](https://aws.amazon.com/blogs/networking-and-content-delivery/introducing-aws-outposts-private-connectivity/).

 You should provision redundant network paths between the Outpost and the anchor points in the Region with connections terminating on separate devices in more than one location. Dynamic routing should be configured to automatically reroute traffic to alternate paths when connections or networking devices fail. You should provision sufficient network capacity to ensure that the failure of one WAN path does not overwhelm the remaining paths. 

The following diagram shows three Outposts with redundant network paths to their anchor AZs using AWS Direct Connect as well as public internet connectivity. Outpost A and Outpost B are anchored to different Availability Zones in the same Region. Outpost A connects to private anchor points in AZ 1 of Region 1. Outpost B connects to public anchor points in AZ 2 of Region 1. Outpost C connects to public anchor points in AZ 1 of Region 2.

![Diagram showing highly available anchor connectivity with AWS Direct Connect and public internet access](http://docs.aws.amazon.com/whitepapers/latest/aws-outposts-high-availability-design/images/highly-available-anchor-connectivity.png)


 Outpost A has three redundant network paths to reach its private anchor point. Two paths are available through redundant Direct Connect circuits at a single Direct Connect location. The third path is available through a Direct Connect circuit at a second Direct Connect location. This design keeps Outpost A’s service link traffic on private networks and provides path redundancy that allows for failure of any one of the Direct Connect circuits or failure of an entire Direct Connect location. 

 Outpost B has four redundant network paths to reach its public anchor point. Three paths are available through public VIFs provisioned on the Direct Connect circuits and locations used by Outpost A. The fourth path is available through the customer WAN and the public internet. Outpost B’s service link traffic may be routed over any available path that can successfully reach the anchor points on the public internet. Using the Direct Connect paths may provide more consistent latency and higher bandwidth availability, while the public internet path may be used for Disaster Recovery (DR) or bandwidth augmentation scenarios. 

 Outpost C has two redundant network paths to reach its public anchor point. Outpost C is deployed in a different data center than Outposts A and B. Outpost C’s data center does not have dedicated circuits connecting to the customer WAN. Instead, the data center has redundant internet connections provided by two different Internet Service Providers (ISPs). Outpost C’s service link traffic may be routed over either of the ISP networks to reach the anchor points on the public internet. This design allows flexibility to route service link traffic over any available public internet connection. However, the end-to-end path is dependent on public third-party networks where bandwidth availability and network latency fluctuate. 

 The network path between an Outpost and its service link anchor points must meet the following bandwidth specification:
+ 500 Mbps – 1 Gbps of available bandwidth per Outpost rack (for example, 3 racks: 1.5 – 3 Gbps of available bandwidth)
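The per-rack scaling rule above is simple enough to encode directly. A small illustrative helper that returns the required bandwidth range for a given rack count:

```python
def service_link_bandwidth_gbps(rack_count):
    """Required available service link bandwidth (min_gbps, max_gbps),
    at 500 Mbps - 1 Gbps per Outpost rack."""
    if rack_count < 1:
        raise ValueError("an Outpost deployment has at least one rack")
    return (0.5 * rack_count, 1.0 * rack_count)

print(service_link_bandwidth_gbps(3))  # (1.5, 3.0)
```

Remember that this capacity must survive the failure of any one WAN path, so size each remaining path accordingly.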

## Recommended practices for highly available anchor connectivity
<a name="recommended-practices-for-highly-available-anchor-connectivity"></a>
+  Provision redundant network paths between each Outpost and its anchor points in the Region. 
+  Use Direct Connect (DX) paths to control latency and bandwidth availability. 
+  Ensure that TCP and UDP port 443 are open (outbound) from the Outpost service link CIDR blocks to the [EC2 IP address ranges](https://docs.aws.amazon.com/general/latest/gr/aws-ip-ranges.html) in the parent Region. Ensure the ports are open on all network paths. 
+ If your firewall allows only a subset of the CIDR ranges for the Region, track changes to the published Amazon EC2 IP address ranges and update your rules accordingly.
+  Ensure each path meets the bandwidth availability and latency requirements. 
+  Use dynamic routing to automate traffic redirection around network failures. 
+  Test routing the service link traffic over each planned network path to ensure the path functions as expected. 
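Tracking the EC2 ranges for firewall rules can be automated from the published `ip-ranges.json` file. A sketch of filtering it for the EC2 prefixes of the parent Region; the embedded excerpt only mirrors the file's shape (in practice, fetch the live file from https://ip-ranges.amazonaws.com/ip-ranges.json):

```python
import ipaddress
import json

# Trimmed, illustrative excerpt in the shape of the published ip-ranges.json.
SAMPLE = json.loads("""
{"prefixes": [
  {"ip_prefix": "3.5.140.0/22", "region": "ap-northeast-2", "service": "EC2"},
  {"ip_prefix": "52.93.178.234/32", "region": "us-west-1", "service": "AMAZON"}
]}
""")

def ec2_prefixes(data, region):
    """EC2 service prefixes for a Region -- candidate firewall destinations
    for outbound service link traffic on TCP/UDP 443."""
    return [ipaddress.ip_network(p["ip_prefix"])
            for p in data["prefixes"]
            if p["service"] == "EC2" and p["region"] == region]

prefixes = ec2_prefixes(SAMPLE, "ap-northeast-2")
print([str(p) for p in prefixes])  # ['3.5.140.0/22']
```

Re-running such a filter on the live file whenever AWS publishes an update keeps a subset-based firewall policy from silently blocking new anchor addresses.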

# Application/workload routing
<a name="applicationworkload-routing"></a>

There are two paths out of the Outpost for application workloads:
+ The service link path: Application traffic competes with the Outposts control plane traffic, and the service link limits the [MTU to 1300 bytes](https://docs.aws.amazon.com/outposts/latest/userguide/region-connectivity.html#service-links).
+ The local gateway (LGW) path: Your local network can provide access both to on-premises applications and to applications in the AWS Region.

You configure the Outpost subnet route tables to control which path traffic takes to reach destination networks. Routes that target the LGW direct traffic out the local gateway to the on-premises network. Routes that target services and resources in the Region, such as an internet gateway, NAT gateway, virtual private gateway, or transit gateway (TGW), use the [service link](https://docs.aws.amazon.com/outposts/latest/userguide/region-connectivity.html) to reach those targets. If you have a VPC peering connection between VPCs on the same Outpost, the traffic between the VPCs remains on the Outpost and doesn't use the service link back to the Region. For information on VPC peering, see [Connect VPCs using VPC peering](https://docs.aws.amazon.com/vpc/latest/userguide/vpc-peering.html) in the *Amazon VPC User Guide*.
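The path selection described above is a longest-prefix match over the subnet route table. A minimal sketch with a hypothetical Outpost route table (the CIDRs and target IDs are made up):

```python
import ipaddress

# Illustrative Outpost subnet route table: destination CIDR -> target.
ROUTES = {
    "10.0.0.0/16": "local",        # VPC CIDR: implicit local route
    "192.168.0.0/16": "lgw-0abc",  # on-premises networks via the Local Gateway
    "0.0.0.0/0": "igw-0def",       # everything else via the service link
}

def route_lookup(dst, routes=ROUTES):
    """Longest-prefix match, as the VPC implicit router performs it."""
    addr = ipaddress.ip_address(dst)
    matches = [(ipaddress.ip_network(cidr), target)
               for cidr, target in routes.items()
               if addr in ipaddress.ip_network(cidr)]
    return max(matches, key=lambda m: m[0].prefixlen)[1]

print(route_lookup("192.168.10.5"))  # lgw-0abc (LGW path to on-premises)
print(route_lookup("8.8.8.8"))       # igw-0def (service link path to the Region)
print(route_lookup("10.0.1.7"))      # local (stays within the VPC)
```

Swapping the default route's target between a Region-side gateway and the LGW is how you steer internet-bound traffic between the two egress options discussed later in this section.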

![Diagram showing a visualization of the Outpost service link and LGW network paths](http://docs.aws.amazon.com/whitepapers/latest/aws-outposts-high-availability-design/images/outpost-service-link-and-lgw-network-paths.png)


When planning application routing, consider both normal operation and the limited routing and service availability during network failures. The service link path is not available when an Outpost is disconnected from the Region.

 You should provision diverse paths and configure dynamic routing between the Outpost LGW and your critical on-premises applications, systems and users. Redundant network paths allow the network to route traffic around failures and ensure that on-premises resources will be able to communicate with workloads running on the Outpost during partial network failures. 

Outpost VPC route configurations are static. You configure subnet route tables through the AWS Management Console, CLI, APIs, and other Infrastructure as Code (IaC) tools; however, you cannot modify the subnet route tables during a disconnect event. You would have to reestablish connectivity between the Outpost and the Region to update the route tables. Use the same routes for normal operations as you plan to use during disconnect events.

 Resources on the Outpost can reach the internet via the service link and an Internet Gateway (IGW) in the Region or via the Local Gateway (LGW) path. Routing internet traffic over the LGW path and the on-premises network allows you to use existing on-premises internet ingress/egress points and may provide lower latency, higher MTUs, and reduced AWS data egress charges when compared to using the service link path to an IGW in the Region. 

 If your application must run on-premises and it needs to be accessible from the public internet, you should route the application traffic over your on-premises internet connection(s) to the LGW to reach the resources on the Outpost. 

 While you can configure subnets on an Outpost like public subnets in the Region, this may be an undesirable practice for most use cases. Inbound internet traffic will come in through the AWS Region and be routed over the service link to the resources running on the Outpost. 

 The response traffic will in turn be routed over the service link and back out through the AWS Region’s internet connections. This traffic pattern may add latency and will incur data egress charges as traffic leaves the Region on its way to the Outpost and as return traffic comes back through the Region and egresses out to the internet. If your application can run in the Region, the Region is the best place to run it. 

 Traffic between VPC resources (in the same VPC) will always follow the local VPC CIDR route and be routed between subnets by the implicit VPC routers. 

 For example, traffic between an EC2 instance running on the Outpost and a VPC Endpoint in the Region will always be routed over the service link. 

![Diagram showing local VPC routing through the implicit routers](http://docs.aws.amazon.com/whitepapers/latest/aws-outposts-high-availability-design/images/local-vpc-routing-through-implicit-routers.png)


## Recommended practices for application/workload routing
<a name="recommended-practices-for-applicationworkload-routing"></a>
+  Use the Local Gateway (LGW) path instead of the service link path where possible. 
+  Route internet traffic over the LGW path. 
+  Configure the Outpost subnet routing tables with a standard set of routes – they will be used for both normal operations and during disconnect events. 
+  Provision redundant network paths between the Outpost LGW and critical on-premises application resources. Use dynamic routing to automate traffic redirection around on-premises network failures. 