

# VPC to VPC connectivity
<a name="vpc-to-vpc-connectivity"></a>

Customers can use two different VPC connectivity patterns to set up multi-VPC environments: *many to many* or *hub and spoke*. In the many-to-many approach, traffic is managed individually between each pair of VPCs. In the hub-and-spoke model, all inter-VPC traffic flows through a central resource, which routes traffic based on established rules. 

# VPC peering
<a name="vpc-peering"></a>

The first way to connect two VPCs is to use VPC peering. In this setup, a peering connection enables full bidirectional connectivity between the VPCs and is used to route traffic between them. VPCs in different accounts and AWS Regions can also be peered together. Data transfer over a VPC peering connection that stays within an Availability Zone is free; data transfer that crosses Availability Zones is charged at the standard in-Region data transfer rates. If the VPCs are peered across Regions, standard inter-Region data transfer charges apply.

 VPC peering provides point-to-point connectivity, and it does not support [transitive routing](https://docs.aws.amazon.com/vpc/latest/peering/invalid-peering-configurations.html#transitive-peering). For example, if you have a [VPC peering](https://docs.aws.amazon.com/vpc/latest/userguide/vpc-peering.html) connection between VPC A and VPC B and between VPC A and VPC C, an instance in VPC B cannot transit through VPC A to reach VPC C. To route packets between VPC B and VPC C, you must create a direct VPC peering connection between them. 
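The non-transitive rule can be illustrated with a small Python model (the VPC names A, B, and C follow the example above; this models only the reachability rule, not actual AWS routing):

```python
# Toy model: VPC peering reachability is limited to directly peered pairs;
# a peering connection never forwards traffic on behalf of a third VPC.
peerings = {("A", "B"), ("A", "C")}  # hypothetical example topology

def can_route(src: str, dst: str) -> bool:
    """Traffic flows only over a direct peering; no transit through a hub."""
    return (src, dst) in peerings or (dst, src) in peerings

assert can_route("B", "A")       # direct peering: reachable
assert not can_route("B", "C")   # no transitive path through A
peerings.add(("B", "C"))         # a direct B-C peering is required instead
assert can_route("B", "C")
```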

At scale, when you have tens or hundreds of VPCs, interconnecting them with peering can result in a mesh of hundreds or thousands of peering connections. A large number of connections can be difficult to manage and scale. For example, if you have 100 VPCs and you want to set up full mesh peering between them, it requires 4,950 peering connections (`n(n-1)/2`, where `n` is the total number of VPCs). There is a [maximum limit](https://docs.aws.amazon.com/vpc/latest/userguide/amazon-vpc-limits.html) of 125 active peering connections per VPC.
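The quadratic growth of full-mesh peering can be checked with a few lines of Python:

```python
# Full-mesh peering grows quadratically: n(n-1)/2 connections for n VPCs.
def full_mesh_peering_connections(n: int) -> int:
    """Number of VPC peering connections needed to fully mesh n VPCs."""
    return n * (n - 1) // 2

for n in (10, 50, 100):
    print(f"{n} VPCs -> {full_mesh_peering_connections(n)} peering connections")
# 100 VPCs -> 4950 connections, and each VPC would need 99 peerings,
# approaching the 125 active peering connections per VPC limit.
```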

![\[A diagram depicting network setup using VPC peering\]](http://docs.aws.amazon.com/whitepapers/latest/building-scalable-secure-multi-vpc-network-infrastructure/images/network-setup-vpc-peering.png)


If you are using VPC peering, on-premises connectivity (VPN and/or Direct Connect) must be made to each VPC. Resources in a VPC cannot reach on-premises using the hybrid connectivity of a peered VPC, as shown in the preceding figure. 

 VPC peering is best used when resources in one VPC must communicate with resources in another VPC, the environment of both VPCs is controlled and secured, and the number of VPCs to be connected is less than 10 (to allow for the individual management of each connection). VPC peering offers the lowest overall cost and highest aggregate performance when compared to other options for inter-VPC connectivity. 

# AWS Transit Gateway 
<a name="transit-gateway"></a>

 [AWS Transit Gateway](https://docs.aws.amazon.com/vpc/latest/tgw/what-is-transit-gateway.html) provides a hub and spoke design for connecting VPCs and on-premises networks as a fully managed service without requiring you to provision third-party virtual appliances. No VPN overlay is required, and AWS manages high availability and scalability. 

 Transit Gateway enables customers to connect thousands of VPCs. You can attach all your hybrid connectivity (VPN and Direct Connect connections) to a single gateway, consolidating and controlling your organization's entire AWS routing configuration in one place (refer to the following figure). Transit Gateway controls how traffic is routed among all the connected spoke networks using route tables. This hub-and-spoke model simplifies management and reduces operational costs because VPCs only connect to the Transit Gateway instance to gain access to the connected networks. 

![\[A diagram depicting hub and spoke design with AWS Transit Gateway\]](http://docs.aws.amazon.com/whitepapers/latest/building-scalable-secure-multi-vpc-network-infrastructure/images/hub-and-spoke-design.png)


Transit Gateway is a Regional resource and can connect thousands of VPCs within the same AWS Region. You can connect multiple gateways over a single Direct Connect connection for hybrid connectivity. Typically, you can use just one Transit Gateway instance to connect all your VPCs in a given Region, and use Transit Gateway route tables to isolate them wherever needed. You do not need additional transit gateways for high availability, because transit gateways are highly available by design; a single gateway in each Region is sufficient. However, there are valid cases for creating multiple gateways, such as limiting the blast radius of misconfiguration, segregating control plane operations, and administrative ease of use.

With Transit Gateway peering, customers can peer their Transit Gateway instances within the same Region or across Regions and route traffic between them. Peering uses the same underlying infrastructure as VPC peering, and traffic is therefore encrypted. For more information, refer to [Building a global network using AWS Transit Gateway Inter-Region peering](https://aws.amazon.com/blogs/networking-and-content-delivery/building-a-global-network-using-aws-transit-gateway-inter-region-peering/) and [AWS Transit Gateway now supports Intra-Region Peering](https://aws.amazon.com/blogs/networking-and-content-delivery/aws-transit-gateway-now-supports-intra-region-peering/).

 Place your organization’s Transit Gateway instance in its Network Services account. This enables centralized management by the network engineers who manage the Network Services account. Use AWS Resource Access Manager (AWS RAM) to share a Transit Gateway instance for connecting VPCs across multiple accounts in your AWS Organization within the same Region. AWS RAM enables you to easily and securely share AWS resources with any AWS account or within your AWS Organization. For more information, refer to the [Automating AWS Transit Gateway attachments to a transit gateway in a central account](https://aws.amazon.com/blogs/networking-and-content-delivery/automating-aws-transit-gateway-attachments-to-a-transit-gateway-in-a-central-account/) blog post. 
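As an illustration, sharing a transit gateway through AWS RAM and attaching a participant VPC might look like the following CLI sketch (all ARNs and resource IDs are placeholders; verify the options against the current AWS CLI reference):

```shell
# Run in the Network Services account that owns the transit gateway.
# The transit gateway and organization ARNs below are placeholders.
aws ram create-resource-share \
    --name shared-transit-gateway \
    --resource-arns arn:aws:ec2:us-east-1:111111111111:transit-gateway/tgw-0123456789abcdef0 \
    --principals arn:aws:organizations::111111111111:organization/o-exampleorgid

# In each participant account, attach a VPC to the shared transit gateway.
aws ec2 create-transit-gateway-vpc-attachment \
    --transit-gateway-id tgw-0123456789abcdef0 \
    --vpc-id vpc-0abc1234def567890 \
    --subnet-ids subnet-0123abcd subnet-4567efgh
```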

Transit Gateway also allows you to establish connectivity between SD-WAN infrastructure and AWS using Transit Gateway Connect. A Transit Gateway Connect attachment uses Border Gateway Protocol (BGP) for dynamic routing and the Generic Routing Encapsulation (GRE) tunnel protocol for high performance, delivering up to 20 Gbps of total bandwidth per Connect attachment (up to four Transit Gateway Connect peers per Connect attachment). By using Transit Gateway Connect, you can integrate either on-premises SD-WAN infrastructure or SD-WAN appliances running in the cloud, through a VPC attachment or a Direct Connect attachment as the underlying transport layer. Refer to [Simplify SD-WAN connectivity with AWS Transit Gateway Connect](https://aws.amazon.com/blogs/networking-and-content-delivery/simplify-sd-wan-connectivity-with-aws-transit-gateway-connect/) for reference architectures and detailed configuration. 

# Transit VPC solution
<a name="transit-vpc-solution"></a>

 [Transit VPCs](https://docs.aws.amazon.com/whitepapers/latest/aws-vpc-connectivity-options/transit-vpc-option.html) provide an alternative to VPC peering by introducing a hub-and-spoke design for inter-VPC connectivity. In a transit VPC network, one central VPC (the hub VPC) connects with every other VPC (spoke VPC) through a VPN connection, typically using BGP over [IPsec](https://en.wikipedia.org/wiki/IPsec). The central VPC contains [Amazon Elastic Compute Cloud](https://aws.amazon.com/ec2/) (Amazon EC2) instances running software appliances that route incoming traffic to their destinations using the VPN overlay. The transit VPC approach has the following advantages: 
+  Transitive routing is enabled using the overlay VPN network — allowing for a hub and spoke design. 
+  When using third-party vendor software on the EC2 instances in the hub transit VPC, you can use vendor functionality for advanced security (layer 7 firewall, Intrusion Prevention System (IPS), Intrusion Detection System (IDS)). If customers use the same software on-premises, they benefit from a unified operational and monitoring experience. 
+ The Transit VPC architecture enables connectivity that may be desired in some use cases. For example, you can connect a VPC in an AWS GovCloud (US) Region and a VPC in a commercial Region, or a Transit Gateway instance, to a transit VPC and enable inter-VPC connectivity between the two Regions. Evaluate your security and compliance requirements when considering this option. For additional security, you can deploy a centralized inspection model using design patterns described later in this whitepaper. 

![\[A diagram depicting a transit VPC with virtual appliances\]](http://docs.aws.amazon.com/whitepapers/latest/building-scalable-secure-multi-vpc-network-infrastructure/images/transit-vpc-virtual-appliances.png)


Transit VPC comes with its own challenges, such as higher costs for running third-party vendor virtual appliances on EC2 based on the instance size/family, limited throughput per VPN connection (up to 1.25 Gbps per VPN tunnel), and additional configuration, management, and resiliency overhead (customers are responsible for managing the high availability and redundancy of the EC2 instances running the third-party virtual appliances).

## VPC peering vs. Transit VPC vs. Transit Gateway
<a name="peering-vs"></a>

*Table 1 — Connectivity comparison*


| Criteria  | VPC peering  | Transit VPC | Transit Gateway | PrivateLink | Cloud WAN | VPC Lattice | 
| --- | --- | --- | --- | --- | --- | --- | 
|  Scope   | Regional/Global | Regional  | Regional  | Regional | Global | Regional | 
| Architecture | Full mesh | VPN-based hub-and-spoke | Attachments-based hub-and-spoke | Provider or Consumer Model | Attachments based, multi-region | App to App connectivity | 
|  Scale   | 125 active Peers/VPC  | Depends on virtual router/EC2  | 5000 attachments per Region  | No limits | 5000 attachments per core network | 500 VPC associations per service | 
|  Segmentation   | Security groups  | Customer managed  | Transit Gateway route tables  | No segmentation | Segments | Service and service network policies | 
|  Latency   | Lowest  | Extra, due to VPN encryption overhead  | Additional Transit Gateway hop  | Traffic stays on AWS backbone, customers should test | Uses the same dataplane as Transit Gateway | Traffic stays on AWS backbone, customers should test | 
|  Bandwidth limit   | Per instance limits, no aggregate limit  | Subject to EC2 instance bandwidth limits based on size/family  | Up to 100 Gbps (burst)/attachment  | 10 Gbps per Availability Zone, automatically scales up to 100 Gbps | Up to 100 Gbps (burst)/attachment | 10 Gbps per Availability Zone | 
|  Visibility   | VPC Flow Logs  | VPC Flow Logs and CloudWatch Metrics  | Transit Gateway Network Manager, VPC Flow Logs, CloudWatch Metrics  | CloudWatch Metrics  | Network Manager, VPC Flow Logs, CloudWatch Metrics  | CloudWatch Access Logs | 
|  Security group  cross-referencing   | Supported  | Not supported  | Not supported  | Not supported | Not supported | Not applicable | 
| IPv6 support  | Supported | Depends on virtual appliance  | Supported | Supported | Supported | Supported | 

# AWS PrivateLink
<a name="aws-privatelink"></a>

[AWS PrivateLink](https://aws.amazon.com/privatelink/) provides private connectivity between VPCs, AWS services, and your on-premises networks without exposing your traffic to the public internet. Interface VPC endpoints, powered by AWS PrivateLink, make it easy to connect to AWS and other services across different accounts and VPCs, significantly simplifying your network architecture. This allows customers to privately expose a service or application residing in one VPC (the service provider) to other VPCs (consumers) within an AWS Region, in a way that only consumer VPCs can initiate connections to the service provider VPC. An example of this is the ability for your private applications to access service provider APIs.

 To use AWS PrivateLink, create a Network Load Balancer (NLB) for your application in your VPC, and create a VPC endpoint service configuration pointing to that load balancer. A service consumer then creates an interface endpoint to your service. This creates an elastic network interface (ENI) in the consumer subnet with a private IP address that serves as an entry point for traffic destined for the service. The consumer and service are not required to be in the same VPC; the consumer and service provider VPCs can even have overlapping IP address ranges. In addition to creating interface VPC endpoints to access services in other VPCs, you can create interface VPC endpoints to privately access [supported AWS services](https://docs.aws.amazon.com/vpc/latest/userguide/vpce-interface.html) through AWS PrivateLink, as shown in the following figure. 
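As a sketch of the provider/consumer workflow (resource IDs and ARNs are placeholders, the example omits creating the Network Load Balancer itself, and the options should be checked against the current AWS CLI reference):

```shell
# Service provider: expose an NLB-fronted service as an endpoint service.
aws ec2 create-vpc-endpoint-service-configuration \
    --network-load-balancer-arns arn:aws:elasticloadbalancing:us-east-1:111111111111:loadbalancer/net/my-nlb/0123456789abcdef \
    --acceptance-required

# Service consumer: create an interface endpoint to the provider's service.
aws ec2 create-vpc-endpoint \
    --vpc-endpoint-type Interface \
    --vpc-id vpc-0abc1234def567890 \
    --service-name com.amazonaws.vpce.us-east-1.vpce-svc-0123456789abcdef0 \
    --subnet-ids subnet-0123abcd \
    --security-group-ids sg-0123abcd
```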

With an Application Load Balancer (ALB) as a target of the NLB, you can combine the ALB's advanced routing capabilities with AWS PrivateLink. Refer to [Application Load Balancer-type Target Group for Network Load Balancer](https://aws.amazon.com/blogs/networking-and-content-delivery/application-load-balancer-type-target-group-for-network-load-balancer/) for reference architectures and detailed configuration.

![\[A diagram depicting AWS PrivateLink for connectivity to other VPCs and AWS Services\]](http://docs.aws.amazon.com/whitepapers/latest/building-scalable-secure-multi-vpc-network-infrastructure/images/aws-privatelink.png)


 The choice between Transit Gateway, VPC peering, and AWS PrivateLink depends on the connectivity model you need. 
+  **AWS PrivateLink** — Use AWS PrivateLink when you have a client/server setup where you want to allow one or more consumer VPCs unidirectional access to a specific service or set of instances in the service provider VPC, or to certain AWS services. Only clients with access in the consumer VPC can initiate a connection to the service in the service provider VPC or AWS service. This is also a good option when the clients and servers in the two VPCs have overlapping IP addresses, because AWS PrivateLink uses ENIs within the client VPC in a manner that ensures there are no IP conflicts with the service provider. You can access AWS PrivateLink endpoints over VPC peering, VPN, Transit Gateway, Cloud WAN, and AWS Direct Connect. 
+  **VPC peering and Transit Gateway** — Use VPC peering and Transit Gateway when you want to enable layer-3 IP connectivity between VPCs. 

  Your architecture will likely contain a mix of these technologies to fulfill different use cases, and all of these services can be combined and operated with each other. For example, AWS PrivateLink can handle API-style client-server connectivity, VPC peering can handle direct connectivity requirements where placement groups may still be desired within a Region or where inter-Region connectivity is needed, and Transit Gateway can simplify connectivity of VPCs at scale as well as edge consolidation for hybrid connectivity. 

# VPC sharing
<a name="amazon-vpc-sharing"></a>

Sharing VPCs is useful when network isolation between teams does not need to be strictly managed by the VPC owner, but the account level users and permissions must be. With [Shared VPC](https://docs.aws.amazon.com/vpc/latest/userguide/vpc-sharing.html), multiple AWS accounts create their application resources (such as Amazon EC2 instances) in shared, centrally managed Amazon VPCs. In this model, the account that owns the VPC (owner) shares one or more subnets with other accounts (participants). After a subnet is shared, the participants can view, create, modify, and delete their application resources in the subnets shared with them. Participants cannot view, modify, or delete resources that belong to other participants or the VPC owner. Security between resources in shared VPCs is managed using security groups, network access control lists (NACLs), or through a firewall between the subnets.

 VPC sharing benefits: 
+  Simplified design — no complexity around inter-VPC connectivity 
+  Fewer managed VPCs 
+  Segregation of duties between network teams and application owners 
+  Better IPv4 address utilization 
+  Lower costs — no data transfer charges between instances belonging to different accounts within the same Availability Zone 

**Note**  
 When you share a subnet with multiple accounts, your participants should have some level of cooperation, because they share IP space and network resources. If necessary, you can choose to share a different subnet for each participant account. One subnet per participant enables network ACLs to provide network isolation in addition to security groups. 

 Most customer architectures will contain multiple VPCs, many of which will be shared with two or more accounts. Transit Gateway and VPC peering can be used to connect the shared VPCs. For example, suppose you have 10 applications. Each application requires its own AWS account. The apps can be categorized into two application portfolios (apps within the same portfolio have similar networking requirements, App 1–5 in ‘Marketing’ and App 6–10 in ‘Sales’). 

 You can have one VPC per application portfolio (two VPCs total), and the VPC is shared with the different application owner accounts within that portfolio. App owners deploy apps into their respective shared VPC (in this case, in the different subnets for network route segmentation and isolation using NACLs). The two shared VPCs are connected via the Transit Gateway. With this setup, you could go from having to connect 10 VPCs to just two, as seen in the following figure. 

![\[A diagram depicting an example setup for shared VPC\]](http://docs.aws.amazon.com/whitepapers/latest/building-scalable-secure-multi-vpc-network-infrastructure/images/example-setup-shared-vpc.png)


**Note**  
 VPC sharing participants cannot create all AWS resources in a shared subnet. For more information, refer to the [Limitations](https://docs.aws.amazon.com/vpc/latest/userguide/vpc-sharing.html#vpc-share-limitations) section in the VPC Sharing documentation.   
For more information about the key considerations and best practices for VPC sharing, refer to the [VPC sharing: key considerations and best practices](https://aws.amazon.com/blogs/networking-and-content-delivery/vpc-sharing-key-considerations-and-best-practices/) blog post.

# Private NAT Gateway
<a name="private-nat-gateway"></a>

Teams often work independently and might create a new VPC for a project, which may have overlapping classless inter-domain routing (CIDR) blocks. For integration, they might want to enable communication between networks with overlapping CIDRs, which is not achievable with features such as VPC peering and Transit Gateway. A private NAT gateway can help with this use case: the private NAT gateway uses a unique private IP address to perform source NAT for the overlapping source IP address, and Elastic Load Balancing performs the destination NAT for the overlapping destination IP address. You can route traffic from your private NAT gateway to other VPCs or on-premises networks using Transit Gateway or a virtual private gateway.



![\[A diagram depicting an example setup for a private NAT gateway\]](http://docs.aws.amazon.com/whitepapers/latest/building-scalable-secure-multi-vpc-network-infrastructure/images/example-setup-private-nat-gateway.png)


The preceding figure shows two non-routable (overlapping CIDR, `100.64.0.0/16`) subnets in VPCs A and B. To establish a connection between them, you can add secondary, non-overlapping, routable CIDRs (routable subnets `10.0.1.0/24` and `10.0.2.0/24`) to VPCs A and B, respectively. The routable CIDRs should be allocated by the network management team responsible for IP allocation. A private NAT gateway is added to the routable subnet in VPC A with an IP address of `10.0.1.125`. The private NAT gateway performs source network address translation on requests from instances in the non-routable subnet of VPC A, so traffic from `100.64.0.10` appears to come from `10.0.1.125`, the ENI of the private NAT gateway. Traffic is then sent to a routable IP address assigned to the Application Load Balancer (ALB) in VPC B (`10.0.2.10`), which has a target of `100.64.0.10`, and is routed through Transit Gateway. Return traffic is processed by the private NAT gateway back to the original Amazon EC2 instance that requested the connection.
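The two translation steps in this example can be traced with a small conceptual Python model (a sketch of the figure's flow using the addresses above, not of actual AWS behavior):

```python
# Conceptual model of the figure's traffic path: an instance in VPC A's
# non-routable subnet reaches an overlapping target behind an ALB in VPC B.
NAT_GW_IP = "10.0.1.125"   # private NAT gateway ENI in VPC A's routable subnet
ALB_IP    = "10.0.2.10"    # routable ALB address in VPC B
TARGET_IP = "100.64.0.10"  # overlapping, non-routable target behind the ALB

def outbound(src: str, dst: str) -> tuple:
    """Source NAT at the private NAT gateway, destination NAT at the ALB."""
    src = NAT_GW_IP if src.startswith("100.64.") else src  # SNAT in VPC A
    dst = TARGET_IP if dst == ALB_IP else dst              # DNAT at ALB in VPC B
    return src, dst

# The instance 100.64.0.10 in VPC A sends traffic to the ALB's routable IP:
print(outbound("100.64.0.10", ALB_IP))  # ('10.0.1.125', '100.64.0.10')
```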

A private NAT gateway can also be used when your on-premises network restricts access to approved IPs. Some customers are required by compliance to communicate with on-premises networks only over private connectivity (no internet gateway) and only through a limited, contiguous block of approved IPs owned by the customer. Instead of allocating each instance a separate IP from the block, you can run large workloads in AWS VPCs behind each allow-listed IP using a private NAT gateway. For details, refer to the [How to solve Private IP exhaustion with Private NAT Solution](https://aws.amazon.com/blogs/networking-and-content-delivery/how-to-solve-private-ip-exhaustion-with-private-nat-solution/) blog post.

![\[A diagram depicting how to use a private NAT gateway to provide approved IPs for on-premises network\]](http://docs.aws.amazon.com/whitepapers/latest/building-scalable-secure-multi-vpc-network-infrastructure/images/how-to-use-nat.png)


# AWS Cloud WAN
<a name="aws-cloud-wan"></a>

 AWS Cloud WAN provides a new way to connect networks that previously required a combination of transit gateways, VPC peering, and IPsec VPN tunnels. Previously, you would configure one or more VPCs, connect them together with one of those methods, and use IPsec VPN or Direct Connect to connect to on-premises networks; your network and security posture constructs were defined in one place, and your networks in another. Cloud WAN allows you to centralize all of these constructs in a single place. By policy, you can segment your networks to determine which networks can communicate with each other, and use these segments to isolate production traffic from development or test workloads, or from your on-premises networks. 

![\[A diagram depicting AWS Cloud WAN connectivity\]](http://docs.aws.amazon.com/whitepapers/latest/building-scalable-secure-multi-vpc-network-infrastructure/images/cloud-wan-diagram.png)


 Manage your global network through the AWS Network Manager user interface and APIs. The global network is the root-level container for all your network objects; the core network is the part of your global network managed by AWS. A core network policy (CNP) is a single versioned policy document that defines all aspects of your core network. Attachments are any connections or resources you want to add to your core network. A core network edge (CNE) is a local connection point for attachments that comply with the policy. Network segments are routing domains which, by default, allow communication only within a segment. 

 To use Cloud WAN: 

1.  In AWS Network Manager, create a global network and associated core network. 

1.  Create a CNP that defines segments, the ASN range, AWS Regions, and the tags used to associate attachments with segments. 

1.  Apply the network policy. 

1.  Share the core network with your users, accounts, or organizations using AWS Resource Access Manager. 

1.  Create and tag attachments. 

1.  Update routes in your attached VPCs to include the core network. 
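Steps 2 and 3 revolve around the core network policy document. A minimal CNP sketch might look like the following (segment names, the ASN range, and Regions are illustrative; check the Cloud WAN documentation for the authoritative schema):

```json
{
  "version": "2021.12",
  "core-network-configuration": {
    "asn-ranges": ["64512-64555"],
    "edge-locations": [{ "location": "us-east-1" }, { "location": "eu-west-1" }]
  },
  "segments": [
    { "name": "production", "require-attachment-acceptance": true },
    { "name": "development", "require-attachment-acceptance": false }
  ],
  "attachment-policies": [
    {
      "rule-number": 100,
      "conditions": [
        { "type": "tag-value", "key": "segment", "operator": "equals", "value": "production" }
      ],
      "action": { "association-method": "constant", "segment": "production" }
    }
  ]
}
```

Attachments tagged `segment=production` would be associated with the `production` segment by the attachment policy; segments isolate traffic from each other by default.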

 Cloud WAN was designed to simplify the process of connecting your AWS infrastructure globally. It allows you to segment traffic with a centralized permissions policy and use your existing infrastructure at your company locations. Cloud WAN also connects your VPCs, SD-WANs, client VPNs, firewalls, and data center resources. For more information, see [AWS Cloud WAN blog posts](https://aws.amazon.com/blogs/networking-and-content-delivery/category/networking-content-delivery/aws-cloud-wan/). 

 AWS Cloud WAN enables a unified network connecting cloud and on-premises environments. Organizations use next-generation firewalls (NGFWs) and intrusion prevention systems (IPSs) for security. The [AWS Cloud WAN and Transit Gateway migration and interoperability patterns](https://aws.amazon.com/blogs/networking-and-content-delivery/aws-cloud-wan-and-aws-transit-gateway-migration-and-interoperability-patterns/) blog post describes architectural patterns for centrally managing and inspecting outbound network traffic in a Cloud WAN network, including single-Region and multi-Region networks and the associated route table configuration. These architectures help keep data and applications secure. 

 For more information about Cloud WAN, see the [Centralized outbound inspection architecture in AWS Cloud WAN](https://aws.amazon.com/blogs/networking-and-content-delivery/centralized-outbound-inspection-architecture-in-aws-cloud-wan/) blog post. 

# Amazon VPC Lattice
<a name="vpc-lattice"></a>

 Amazon VPC Lattice is a fully managed application networking service that is used to connect, monitor, and secure services across various accounts and virtual private clouds. VPC Lattice helps to interconnect services within a logical boundary, so that you can manage and discover them efficiently. 

 VPC Lattice consists of the following components: 
+  **Service** - A unit of application running on an instance, a container, or a Lambda function, consisting of listeners, rules, and target groups. 
+  **Service network** - This is the logical boundary that is used to automatically implement service discovery and connectivity and apply common access and observability policies to a collection of services. 
+  **Auth policies** - IAM resource policies that can be associated with a service network or individual services to support request-level authentication and context-specific authorization. 
+  **Service Directory** - A centralized view of the services that you own or that have been shared with you through the AWS Resource Access Manager. 

 VPC Lattice usage steps: 

1.  Create the service network. The service network usually resides in a network account where a network administrator has full access. The service network can be shared across multiple accounts within an organization; sharing can be performed on individual services or the entire service network. 

1.  Attach VPCs to the service network to enable application networking for each VPC, so that different services can start consuming other services that are registered within the network. Security groups are applied to control traffic. 

1.  Developers define the services, which are populated in the service directory and registered into the service network. VPC Lattice contains the address book of all services configured. Developers can also define routing polices to use blue/green deployments. Security is managed at the service network level where authentication and authorization policies are defined and at the service level where access policies with IAM are implemented. 
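As an illustration, the workflow above maps to AWS CLI calls along these lines (all identifiers are placeholders; verify the commands against the current `aws vpc-lattice` reference):

```shell
# Network account: create the service network (share it with AWS RAM as needed).
aws vpc-lattice create-service-network --name my-service-network

# Associate a consumer VPC with the service network, guarded by a security group.
aws vpc-lattice create-service-network-vpc-association \
    --service-network-identifier sn-0123456789abcdef0 \
    --vpc-identifier vpc-0abc1234def567890 \
    --security-group-ids sg-0123abcd

# Developer account: define a service and register it into the service network.
aws vpc-lattice create-service --name my-service
aws vpc-lattice create-service-network-service-association \
    --service-network-identifier sn-0123456789abcdef0 \
    --service-identifier svc-0123456789abcdef0
```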

![\[A diagram depicting VPC Lattice communication flows\]](http://docs.aws.amazon.com/whitepapers/latest/building-scalable-secure-multi-vpc-network-infrastructure/images/vpc-lattice.png)


 More details can be found in the [VPC Lattice user guide](https://docs.aws.amazon.com/vpc-lattice/latest/ug/what-is-vpc-lattice.html). 