

# Centralized egress to internet
<a name="centralized-egress-to-internet"></a>

As you deploy applications in your multi-account environment, many apps will require outbound-only internet access (for example, downloading libraries, patches, or OS updates). This can be achieved for both IPv4 and IPv6 traffic. For IPv4, egress can be achieved through network address translation (NAT) in the form of a NAT gateway (recommended) or, alternatively, a self-managed NAT instance running on an Amazon EC2 instance. Internal applications reside in private subnets, while NAT gateways and Amazon EC2 NAT instances reside in public subnets.

AWS recommends that you use NAT gateways because they provide better availability and bandwidth and require less effort on your part to administer. For more information, refer to [Compare NAT gateways and NAT instances](https://docs.aws.amazon.com/vpc/latest/userguide/vpc-nat-comparison.html).

For IPv6, egress traffic can be configured to leave each VPC through an egress-only internet gateway in a decentralized manner, or it can be sent to a centralized VPC using NAT instances or proxy instances. The IPv6 patterns are discussed in [Centralized egress for IPv6](centralized-egress-for-ipv6.md).

**Topics**
+ [Using the NAT gateway for centralized IPv4 egress](using-nat-gateway-for-centralized-egress.md)
+ [Using the NAT gateway with AWS Network Firewall for centralized IPv4 egress](using-nat-gateway-with-firewall.md)
+ [Using the NAT gateway and Gateway Load Balancer with Amazon EC2 instances for centralized IPv4 egress](using-nat-gateway-and-gwlb-with-ec2.md)
+ [Centralized egress for IPv6](centralized-egress-for-ipv6.md)

# Using the NAT gateway for centralized IPv4 egress
<a name="using-nat-gateway-for-centralized-egress"></a>

NAT gateway is a managed network address translation service. Deploying a NAT gateway in every spoke VPC can become cost prohibitive because you pay an hourly charge for every NAT gateway you deploy (refer to [Amazon VPC pricing](http://aws.amazon.com/vpc/pricing/)). Centralizing NAT gateways can be a viable option to reduce costs. To centralize, you create a separate egress VPC in the network services account, deploy NAT gateways in the egress VPC, and route all egress traffic from the spoke VPCs to the NAT gateways residing in the egress VPC using Transit Gateway or AWS Cloud WAN, as shown in the following figure.

**Note**  
When you centralize NAT gateways using Transit Gateway, you pay an additional Transit Gateway data processing charge compared to the decentralized approach of running a NAT gateway in every VPC. In edge cases where a VPC sends large amounts of data through the NAT gateway, keeping the NAT gateway local to that VPC to avoid the Transit Gateway data processing charge might be the more cost-effective option.

![\[A diagram depicting a decentralized high availability NAT gateway architecture\]](http://docs.aws.amazon.com/whitepapers/latest/building-scalable-secure-multi-vpc-network-infrastructure/images/decentralized-ha-nat-gateway.png)


![\[A diagram depicting a centralized NAT gateway using Transit Gateway (overview)\]](http://docs.aws.amazon.com/whitepapers/latest/building-scalable-secure-multi-vpc-network-infrastructure/images/centralized-nat-gateway-tg.png)


![\[A diagram depicting a centralized NAT gateway using Transit Gateway (route table design)\]](http://docs.aws.amazon.com/whitepapers/latest/building-scalable-secure-multi-vpc-network-infrastructure/images/nat-gateway-tg-rt.png)


In this setup, spoke VPC attachments are associated with Route Table 1 (RT1) and are propagated to Route Table 2 (RT2). There is a [Blackhole](https://docs.aws.amazon.com/vpc/latest/tgw/tgw-route-tables.html) route to disallow the two VPCs from communicating with each other. If you want to allow inter-VPC communication, you can remove the `10.0.0.0/8 -> Blackhole` route entry from RT1. This allows them to communicate via the transit gateway. You can also propagate the spoke VPC attachments to RT1 (or alternatively, you can use one route table and associate/propagate everything to that), enabling direct traffic ﬂow between the VPCs using Transit Gateway.

You add a static route in RT1 pointing all traffic to the egress VPC. Because of this static route, Transit Gateway sends all internet-bound traffic through its elastic network interfaces (ENIs) in the egress VPC. Once in the egress VPC, traffic follows the routes defined in the route table of the subnet where these Transit Gateway ENIs reside. You add a route in those subnet route tables pointing all traffic to the NAT gateway in the same Availability Zone (AZ) to minimize cross-AZ traffic. The NAT gateway subnet's route table has the internet gateway (IGW) as the next hop. For return traffic to flow back, you must add a static route entry in the NAT gateway subnet's route table pointing all spoke-VPC-bound traffic to Transit Gateway as the next hop.
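The route evaluation described above (a specific blackhole route shadowing a default route) can be sketched with a small longest-prefix-match lookup. This is an illustration only; the CIDRs and target names are hypothetical, not taken from the diagrams.

```python
import ipaddress

# RT1 from the design above: a Blackhole route drops spoke-to-spoke traffic,
# and a static default route sends everything else to the egress VPC attachment.
RT1 = [
    ("10.0.0.0/8", "blackhole"),           # disallow spoke-to-spoke traffic
    ("0.0.0.0/0",  "egress-vpc-attach"),   # static default route to egress VPC
]

def lookup(route_table, dest_ip):
    """Return the target of the most specific matching route, or None."""
    dest = ipaddress.ip_address(dest_ip)
    best = None
    for cidr, target in route_table:
        net = ipaddress.ip_network(cidr)
        if dest in net and (best is None or net.prefixlen > best[0].prefixlen):
            best = (net, target)
    return best[1] if best else None

# Spoke-to-spoke traffic matches the more specific 10.0.0.0/8 route and is dropped.
print(lookup(RT1, "10.2.0.15"))       # blackhole
# Internet-bound traffic falls through to the default route toward the egress VPC.
print(lookup(RT1, "93.184.216.34"))   # egress-vpc-attach
```

Removing the `10.0.0.0/8 -> Blackhole` entry would make spoke-to-spoke destinations match the default route instead, which is why deleting it enables inter-VPC communication.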

## High availability
<a name="HA-1"></a>

For high availability, use more than one NAT gateway (one in each Availability Zone). If a NAT gateway is unavailable, traffic traversing the affected NAT gateway in that Availability Zone might be dropped. If an entire Availability Zone is unavailable, the Transit Gateway endpoint and NAT gateway in that Availability Zone fail, and traffic flows through the Transit Gateway endpoints and NAT gateways in the other Availability Zones.

## Security
<a name="Security-1"></a>

You can rely on security groups on the source instances, blackhole routes in the Transit Gateway route tables, and the network ACL of the subnet in which the NAT gateway is located. For example, you can use network ACLs on the NAT gateway's public subnets to allow or block specific source or destination IP addresses. Alternatively, you can use the NAT gateway with AWS Network Firewall for centralized egress, described in the next section, to meet this requirement.

## Scalability
<a name="Scalability-1"></a>

A single NAT gateway can support up to 55,000 simultaneous connections per assigned IP address to each unique destination. You can request a quota adjustment to allow up to eight assigned IP addresses, for up to 440,000 simultaneous connections to a single destination IP address and port. A NAT gateway provides 5 Gbps of bandwidth and automatically scales up to 100 Gbps. Transit Gateway does not act as a load balancer and will not distribute your traffic evenly across NAT gateways in multiple Availability Zones; traffic that crosses the Transit Gateway stays within an Availability Zone when possible. If the Amazon EC2 instance initiating traffic is in Availability Zone 1, traffic flows out of the Transit Gateway elastic network interface in Availability Zone 1 of the egress VPC and continues to the next hop based on the route table of the subnet that the elastic network interface resides in. For a complete list of rules, refer to [NAT gateways](https://docs.aws.amazon.com/vpc/latest/userguide/vpc-nat-gateway.html) in the Amazon Virtual Private Cloud documentation.
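The 440,000 figure above follows directly from the per-IP connection limit. A quick check of the arithmetic:

```python
# NAT gateway connection capacity toward one unique destination:
# 55,000 simultaneous connections per assigned IP address, and up to
# eight assigned IP addresses with a quota adjustment.
CONNS_PER_IP = 55_000
MAX_ASSIGNED_IPS = 8

max_conns_per_destination = CONNS_PER_IP * MAX_ASSIGNED_IPS
print(max_conns_per_destination)  # 440000
```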

 For more information, refer to the [Creating a single internet exit point from multiple VPCs Using AWS Transit Gateway](https://aws.amazon.com/blogs/networking-and-content-delivery/creating-a-single-internet-exit-point-from-multiple-vpcs-using-aws-transit-gateway/) blog post. 

# Using the NAT gateway with AWS Network Firewall for centralized IPv4 egress
<a name="using-nat-gateway-with-firewall"></a>

If you want to inspect and filter your outbound traffic, you can incorporate AWS Network Firewall with NAT gateway in your centralized egress architecture. AWS Network Firewall is a managed service that makes it easy to deploy essential network protections for all of your VPCs. It provides control of, and visibility into, Layer 3-7 network traffic for your entire VPC. You can perform URL/domain name, IP address, and content-based outbound traffic filtering to stop possible data loss, help meet compliance requirements, and block known malware communications. AWS Network Firewall supports thousands of rules that can filter out network traffic destined for known bad IP addresses or bad domain names. You can also use Suricata IPS rules as part of the AWS Network Firewall service by importing open-source rulesets or authoring your own Intrusion Prevention System (IPS) rules using Suricata rule syntax. AWS Network Firewall also allows you to import compatible rules sourced from AWS partners.

In the centralized egress architecture with inspection, the AWS Network Firewall endpoint is a default route table target in the transit gateway attachments subnet route table for the egress VPC. Traffic between spoke VPCs and the internet is inspected using AWS Network Firewall as shown in the following diagram.

![\[A diagram depicting centralized egress with AWS Network Firewall and NAT gateway (route table design)\]](http://docs.aws.amazon.com/whitepapers/latest/building-scalable-secure-multi-vpc-network-infrastructure/images/centralized-egress-rt.png)


For the centralized deployment model with Transit Gateway, AWS recommends deploying AWS Network Firewall endpoints in multiple Availability Zones: one firewall endpoint in each Availability Zone in which you run workloads, as shown in the preceding diagram. As a best practice, the firewall subnet should not contain any other traffic, because AWS Network Firewall cannot inspect traffic from sources or destinations within a firewall subnet.

Similar to the previous setup, spoke VPC attachments are associated with Route Table 1 (RT1) and are propagated to Route Table 2 (RT2). A Blackhole route is explicitly added to disallow the two VPCs from communicating with each other.

Continue to use a default route in RT1 pointing all traffic to the egress VPC. Transit Gateway forwards all traffic flows to one of the two Availability Zones in the egress VPC. Once traffic reaches a Transit Gateway ENI in the egress VPC, it hits a default route that forwards it to the AWS Network Firewall endpoint in the same Availability Zone. AWS Network Firewall then inspects traffic based on the rules you set before forwarding it to the NAT gateway using a default route.
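The chain of default routes described above can be sketched as a simple next-hop walk. Hop names here are illustrative labels, not AWS resource identifiers:

```python
# Each subnet route table's default route hands egress traffic to the
# next hop until it reaches the internet gateway (inspection architecture).
NEXT_HOP = {
    "tgw-eni-subnet":    "firewall-endpoint",  # default route to the Network Firewall endpoint
    "firewall-endpoint": "nat-gateway",        # inspected traffic forwarded to the NAT gateway
    "nat-gateway":       "internet-gateway",   # NAT subnet routes 0.0.0.0/0 to the IGW
}

def egress_path(start):
    hop, path = start, [start]
    while hop in NEXT_HOP:
        hop = NEXT_HOP[hop]
        path.append(hop)
    return path

print(" -> ".join(egress_path("tgw-eni-subnet")))
# tgw-eni-subnet -> firewall-endpoint -> nat-gateway -> internet-gateway
```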

This case doesn’t require Transit Gateway appliance mode, because you aren’t sending traffic between attachments. 

**Note**  
AWS Network Firewall does not perform network address translation for you; that function is handled by the NAT gateway after traffic is inspected by AWS Network Firewall. Ingress routing is not required in this case, because return traffic is forwarded to the NAT gateway's IP addresses by default.

Because you are using a Transit Gateway, you can place the firewall before the NAT gateway. In this model, the firewall can see the source IP addresses behind the Transit Gateway.

If you were doing this in a single VPC, you could use the VPC routing enhancements that allow you to inspect traffic between subnets in the same VPC. For details, refer to the [Deployment models for AWS Network Firewall with VPC routing enhancements](https://aws.amazon.com/blogs/networking-and-content-delivery/deployment-models-for-aws-network-firewall-with-vpc-routing-enhancements/) blog post.

## Scalability
<a name="scalability-2"></a>

AWS Network Firewall can automatically scale firewall capacity up or down based on traffic load to maintain steady, predictable performance while minimizing costs. AWS Network Firewall is designed to support tens of thousands of firewall rules and can scale up to 100 Gbps of throughput per Availability Zone.

## Key considerations
<a name="key-considerations-1"></a>
+ Each firewall endpoint can handle about 100 Gbps of traffic. If you require higher burst or sustained throughput, contact [AWS Support](https://docs.aws.amazon.com/awssupport/latest/user/getting-started.html).
+ If you choose to create a NAT gateway in your AWS account along with Network Firewall, standard NAT gateway processing and per-hour usage [charges](https://aws.amazon.com/network-firewall/pricing/) are waived on a one-to-one basis with the processing per GB and usage hours charged for your firewall.
+ You can also consider distributed firewall endpoints through AWS Firewall Manager without a Transit Gateway.
+ Test firewall rules before moving them to production; as with a network access control list, rule order matters.
+ Advanced Suricata rules are required for deeper inspection. Network Firewall supports encrypted traffic inspection for both ingress and egress traffic.
+ The `HOME_NET` rule group variable defines the source IP ranges eligible for processing by the stateful engine. In a centralized approach, you must add the CIDRs of all additional VPCs attached to the Transit Gateway to make them eligible for processing. Refer to the [Network Firewall documentation](https://docs.aws.amazon.com/network-firewall/latest/developerguide/stateful-rule-groups-domain-names.html) for more details on the `HOME_NET` rule group variable.
+ Consider deploying Transit Gateway and egress VPC in a separate Network Services account to segregate access based on delegation of duties; for example, only network administrators can access the Network Services account.
+  To simplify deployment and management of AWS Network Firewall in this model, AWS Firewall Manager can be used. Firewall Manager allows you to centrally administer your different firewalls by automatically applying protection you create in the centralized location to multiple accounts. Firewall Manager supports both distributed and centralized deployment models for Network Firewall. To learn more, see the blog post [How to deploy AWS Network Firewall by using AWS Firewall Manager](https://aws.amazon.com/blogs/security/how-to-deploy-aws-network-firewall-by-using-aws-firewall-manager/). 
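The `HOME_NET` consideration above can be sketched as a membership check: the stateful engine only treats traffic as "home" if its source falls inside the configured CIDRs, so spoke VPC CIDRs must be added alongside the inspection VPC's own CIDR. All CIDRs below are hypothetical:

```python
import ipaddress

# HOME_NET in the centralized model: the inspection VPC's CIDR plus every
# spoke VPC CIDR attached to the Transit Gateway (all values illustrative).
HOME_NET = [ipaddress.ip_network(c) for c in (
    "100.64.0.0/16",  # egress/inspection VPC (assumed)
    "10.1.0.0/16",    # spoke VPC A (assumed)
    "10.2.0.0/16",    # spoke VPC B (assumed)
)]

def in_home_net(src_ip):
    """Is this source eligible for stateful-engine processing?"""
    src = ipaddress.ip_address(src_ip)
    return any(src in net for net in HOME_NET)

print(in_home_net("10.1.5.20"))  # True: spoke CIDR was added to HOME_NET
print(in_home_net("10.9.0.1"))   # False: missing CIDR, so rules won't match as intended
```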

# Using the NAT gateway and Gateway Load Balancer with Amazon EC2 instances for centralized IPv4 egress
<a name="using-nat-gateway-and-gwlb-with-ec2"></a>

Using a software-based virtual appliance (on Amazon EC2) from AWS Marketplace and the AWS Partner Network as an exit point is similar to the NAT gateway setup. This option can be used if you want to use the advanced Layer 7 firewall, Intrusion Prevention/Detection System (IPS/IDS), and deep packet inspection capabilities of the various vendor offerings.

In the following figure, in addition to NAT gateways, you deploy virtual appliances using EC2 instances behind a Gateway Load Balancer (GWLB). In this setup, the GWLB, Gateway Load Balancer endpoints (GWLBEs), virtual appliances, and NAT gateways are deployed in a centralized VPC that is connected to Transit Gateway using a VPC attachment. The spoke VPCs are also connected to the Transit Gateway using VPC attachments. Because GWLBEs are a routable target, you can route traffic moving to and from Transit Gateway to the fleet of virtual appliances configured as targets behind a GWLB. The GWLB acts as a bump in the wire and transparently passes all Layer 3 traffic through the third-party virtual appliances, making it invisible to the source and destination of the traffic. This architecture therefore allows you to centrally inspect all of your egress traffic traversing Transit Gateway.

For more information on how the traffic flows from the applications in the VPCs to the internet and back through this setup, refer to [Centralized inspection architecture with AWS Gateway Load Balancer and AWS Transit Gateway](https://aws.amazon.com/blogs/networking-and-content-delivery/centralized-inspection-architecture-with-aws-gateway-load-balancer-and-aws-transit-gateway/).

You can enable appliance mode on Transit Gateway to maintain flow symmetry through the virtual appliances. This means bidirectional traffic is routed through the same appliance and Availability Zone for the life of the flow. This setting is particularly important for stateful firewalls performing deep packet inspection. Enabling appliance mode removes the need for complex workarounds, such as source network address translation (SNAT), to force traffic to return to the correct appliance to maintain symmetry. Refer to [Best practices for deploying Gateway Load Balancer](https://aws.amazon.com/blogs/networking-and-content-delivery/best-practices-for-deploying-gateway-load-balancer/) for details.
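The flow-symmetry property can be illustrated with a direction-normalized 5-tuple hash: because both directions of a flow produce the same key, they select the same appliance. This is a conceptual sketch of the property appliance mode preserves, not how Transit Gateway is implemented internally:

```python
# Two hypothetical appliances, one per Availability Zone.
APPLIANCES = ["appliance-az1", "appliance-az2"]

def pick_appliance(src, sport, dst, dport, proto):
    # Sort the (address, port) endpoints so the forward and return
    # directions of a flow hash to the identical key.
    key = tuple(sorted([(src, sport), (dst, dport)])) + (proto,)
    return APPLIANCES[hash(key) % len(APPLIANCES)]

fwd = pick_appliance("10.1.0.5", 40001, "93.184.216.34", 443, "tcp")
rev = pick_appliance("93.184.216.34", 443, "10.1.0.5", 40001, "tcp")
print(fwd == rev)  # True: both directions land on the same appliance
```

Without this normalization (or appliance mode), return traffic could land on a different appliance that has no state for the flow, which is exactly what stateful firewalls cannot tolerate.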

It is also possible to deploy GWLB endpoints in a distributed manner without Transit Gateway to enable egress inspection. Learn more about this architectural pattern in the [Introducing AWS Gateway Load Balancer: Supported architecture patterns](https://aws.amazon.com/blogs/networking-and-content-delivery/introducing-aws-gateway-load-balancer-supported-architecture-patterns/) blog post.

![\[A diagram depicting Centralized egress with Gateway Load Balancer and EC2 instance (route table design)\]](http://docs.aws.amazon.com/whitepapers/latest/building-scalable-secure-multi-vpc-network-infrastructure/images/centralized-egress-gwlb-and-ec2.png)


## High availability
<a name="high-availabilty-2"></a>

AWS recommends deploying Gateway Load Balancers and virtual appliances in multiple Availability Zones for higher availability.

Gateway Load Balancer can perform health checks to detect virtual appliance failures. If an appliance becomes unhealthy, GWLB reroutes new flows to healthy appliances. Existing flows always go to the same target regardless of its health status; this allows for connection draining and accommodates health check failures due to CPU spikes on appliances. For more details, refer to section 4: Understand appliance and Availability Zone failure scenarios in the blog post [Best practices for deploying Gateway Load Balancer](https://aws.amazon.com/blogs/networking-and-content-delivery/best-practices-for-deploying-gateway-load-balancer/). Gateway Load Balancer can use Auto Scaling groups as targets, which takes the heavy lifting out of managing availability and scalability of the appliance fleets.
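The distinction between new and existing flows can be sketched with a small flow table: new flows consider only healthy targets, while established flows stay pinned to their original target. This is an illustration of the behavior described above, not GWLB's actual implementation:

```python
class FlowTable:
    """Toy load balancer: pin existing flows, steer new flows to healthy targets."""

    def __init__(self, targets):
        self.healthy = dict.fromkeys(targets, True)
        self.flows = {}  # flow 5-tuple -> target

    def route(self, flow):
        if flow in self.flows:                  # existing flow: keep its target
            return self.flows[flow]
        candidates = [t for t, ok in self.healthy.items() if ok]
        target = candidates[hash(flow) % len(candidates)]
        self.flows[flow] = target
        return target

lb = FlowTable(["appliance-a", "appliance-b"])
f1 = ("10.1.0.5", 40001, "93.184.216.34", 443, "tcp")
first = lb.route(f1)

lb.healthy[first] = False       # the appliance serving f1 fails health checks
print(lb.route(f1) == first)    # True: the existing flow is not rerouted (draining)

f2 = ("10.2.0.9", 40500, "198.51.100.7", 443, "tcp")
print(lb.route(f2) != first)    # True: new flows avoid the unhealthy target
```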

## Advantages
<a name="advantages"></a>

Gateway Load Balancer and Gateway Load Balancer endpoints are powered by AWS PrivateLink, which allows for the exchange of traffic across VPC boundaries securely without the need to traverse the public internet.

Gateway Load Balancer is a managed service that removes the undifferentiated heavy lifting of managing, deploying, and scaling virtual security appliances so that you can focus on the things that matter. Gateway Load Balancer can expose a stack of firewalls as an endpoint service that customers subscribe to through the [AWS Marketplace](https://aws.amazon.com/marketplace). This is called firewall as a service (FWaaS); it simplifies deployment and removes the need to rely on BGP and ECMP to distribute traffic across multiple Amazon EC2 instances.

## Key considerations
<a name="key-considerations-2"></a>
+ The appliances need to support [Geneve](https://datatracker.ietf.org/doc/html/rfc8926) encapsulation protocol to integrate with GWLB. 
+ Some third-party appliances can support SNAT and overlay routing ([two-arm mode](https://networkgeekstuff.com/networking/basic-load-balancer-scenarios-explained/)), eliminating the need to create NAT gateways and saving costs. However, consult with the AWS partner of your choice before using this mode, because it depends on vendor support and implementation.
+ Take note of the [GWLB idle timeout](https://docs.aws.amazon.com/elasticloadbalancing/latest/gateway/gateway-load-balancers.html#idle-timeout), which can result in connection timeouts on clients. You can tune timeouts at the client, server, firewall, and OS level to avoid this. Refer to *Section 1: Tune TCP keep-alive or timeout values to support long-lived TCP flows* in the [Best practices for deploying Gateway Load Balancer](https://aws.amazon.com/blogs/networking-and-content-delivery/best-practices-for-deploying-gateway-load-balancer/) blog post for more information.
+ GWLBEs are powered by AWS PrivateLink, so AWS PrivateLink charges apply. You can learn more on the [AWS PrivateLink pricing page](https://aws.amazon.com/privatelink/pricing/). If you are using the centralized model with Transit Gateway, Transit Gateway data processing charges also apply.
+ Consider deploying Transit Gateway and the egress VPC in a separate Network Services account to segregate access based on delegation of duties; for example, only network administrators can access the Network Services account.

# Centralized egress for IPv6
<a name="centralized-egress-for-ipv6"></a>

To support IPv6 egress in dual stack deployments that have centralized IPv4 egress, choose one of two patterns:
+  Centralized IPv4 egress with decentralized IPv6 egress 
+  Centralized IPv4 egress and centralized IPv6 egress 

In the first pattern, shown in the following diagram, egress-only internet gateways are deployed in each spoke VPC. Egress-only internet gateways are horizontally scaled, redundant, and highly available gateways that allow outbound communication over IPv6 from instances inside your VPC. They prevent the internet from initiating IPv6 connections with your instances. There is no charge for egress-only internet gateways. In this deployment model, IPv6 traffic flows out through the egress-only internet gateway in each VPC, while IPv4 traffic flows through the centralized NAT gateways.
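The outbound-only property of an egress-only internet gateway can be sketched as a stateful check: return traffic for a flow the VPC initiated is allowed back in, while internet-initiated connections are dropped. This is a conceptual model, not the gateway's actual implementation; the IPv6 addresses use the documentation prefix:

```python
class EgressOnlyGateway:
    """Toy model of an egress-only internet gateway for IPv6."""

    def __init__(self):
        self.outbound_flows = set()

    def send_outbound(self, src, dst):
        # Instances inside the VPC may always initiate outbound flows.
        self.outbound_flows.add((src, dst))
        return "forwarded"

    def receive_inbound(self, src, dst):
        # Only return traffic for a flow the VPC initiated is allowed in.
        if (dst, src) in self.outbound_flows:
            return "forwarded"
        return "dropped"

eigw = EgressOnlyGateway()
eigw.send_outbound("2001:db8:1::10", "2001:db8:2::1")
print(eigw.receive_inbound("2001:db8:2::1", "2001:db8:1::10"))   # forwarded (return traffic)
print(eigw.receive_inbound("2001:db8:2::99", "2001:db8:1::10"))  # dropped (internet-initiated)
```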

![\[A diagram depicting centralized IPV4 egress and decentralized outbound only IPv6 egress.\]](http://docs.aws.amazon.com/whitepapers/latest/building-scalable-secure-multi-vpc-network-infrastructure/images/centralized-ipv4-egress-and-decentralized-outbound-ipv6.png)


In the second pattern, shown in the following diagrams, egress IPv6 traffic from your instances is sent to a centralized VPC. This can be accomplished by using IPv6-to-IPv6 Network Prefix Translation (NPTv6) with NAT66 instances and NAT gateways, or by using proxy instances and a Network Load Balancer. This pattern is applicable if centralized inspection of outbound traffic is required and it cannot be performed in each spoke VPC.
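The core idea of NPTv6 is a stateless prefix rewrite: the spoke VPC's /64 prefix is swapped for the egress VPC's /64 while the interface identifier (the low 64 bits) is kept. The sketch below shows only the prefix-rewrite idea; real NPTv6 (RFC 6296) also applies a checksum-neutral adjustment, and the prefixes here are illustrative documentation addresses:

```python
import ipaddress

# Hypothetical /64 prefixes: a spoke VPC subnet and the egress VPC subnet.
SPOKE_PREFIX  = ipaddress.ip_network("2001:db8:aaaa:1::/64")
EGRESS_PREFIX = ipaddress.ip_network("2001:db8:ffff:1::/64")

def translate(addr_str):
    """Rewrite the /64 prefix, keeping the 64-bit interface identifier."""
    addr = ipaddress.ip_address(addr_str)
    iid = int(addr) & ((1 << 64) - 1)  # low 64 bits: interface identifier
    return ipaddress.ip_address(int(EGRESS_PREFIX.network_address) | iid)

print(translate("2001:db8:aaaa:1::42"))  # 2001:db8:ffff:1::42
```

Because the mapping is one-to-one and stateless, return traffic can be translated back the same way, which is what distinguishes NPTv6 from stateful NAT.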

![\[A diagram depicting centralized IPv6 egress using NAT gateways and NAT66 instances.\]](http://docs.aws.amazon.com/whitepapers/latest/building-scalable-secure-multi-vpc-network-infrastructure/images/centralized-ipv6-egress-using-nat-gateways.png)


![\[A diagram depicting centralized IPv4 and IPv6 egress using proxy instances and Network Load Balancer.\]](http://docs.aws.amazon.com/whitepapers/latest/building-scalable-secure-multi-vpc-network-infrastructure/images/centralized-ipv4-and-ipv6-egress.png)


 The [IPv6 on AWS whitepaper](https://docs.aws.amazon.com/whitepapers/latest/ipv6-on-aws/advanced-dual-stack-and-ipv6-only-network-designs.html) describes the centralized IPv6 egress patterns. The IPv6 egress patterns are discussed in more detail in the blog [Centralized outbound internet traffic for dual stack IPv4 and IPv6 VPCs](https://aws.amazon.com/blogs/networking-and-content-delivery/centralizing-outbound-internet-traffic-for-dual-stack-ipv4-and-ipv6-vpcs/), along with special considerations, sample solutions, and diagrams. 