Networking & Content Delivery

Centralizing outbound Internet traffic for dual stack IPv4 and IPv6 VPCs

Organizations have been adopting IPv6 in their IPv4 environments to solve IP address exhaustion or meet compliance requirements. Since IPv6 isn’t backward compatible with IPv4, several mechanisms can facilitate communication between hosts that support one or both protocols. One common approach is a dual stack deployment. For architectures where dual stack deployments aren’t the first choice or don’t scale, such as IPv6-only hosts in Amazon Virtual Private Cloud (Amazon VPC) that must communicate with IPv4-only services, NAT64 and DNS64 are another option. You can learn more about the process of NAT64 and DNS64 here, and about dual stack load balancers and architectures here.
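To make the mechanism concrete: with the well-known NAT64 prefix 64:ff9b::/96, DNS64 synthesizes an AAAA record by embedding the four octets of the IPv4 address into the last 32 bits of the IPv6 address (RFC 6052). A quick shell sketch, using an example IPv4 address:

```shell
# Synthesize the NAT64 address for an IPv4-only destination by embedding
# its four octets into the well-known 64:ff9b::/96 prefix (RFC 6052).
ipv4="54.171.191.22"     # example IPv4-only destination
oldIFS=$IFS; IFS=.
set -- $ipv4             # $1..$4 now hold the four octets
IFS=$oldIFS
printf '64:ff9b::%02x%02x:%02x%02x\n' "$1" "$2" "$3" "$4"
# prints 64:ff9b::36ab:bf16
```

This is the same kind of synthesized address that appears in the dig output in the testing section later in this post.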

Depending on your network management and operational model, you can adopt a centralized deployment for your dual stack VPCs to simplify operations, reduce cost, or enforce security controls on outbound traffic from multiple VPCs. An interoperability layer in a VPC configured to provide centralized Internet access solves this problem.

In this post, we’ll explore three different architecture patterns that you can use to centralize Internet outbound traffic for dual stack VPCs, from IPv6 hosts in your dual stack VPCs to IPv4 and/or IPv6 destinations. Furthermore, this blog addresses how to preserve VPC isolation and enable Internet connectivity for applications using either AWS-assigned IPv6 addresses or your own IPv6 address space, configured using Bring Your Own IP (BYOIP), which you may or may not publicly advertise in AWS. For more information about the VPC addressing modes, check the documentation.

Prerequisites and assumptions

This is a 300-level post, so it assumes that you’re familiar with fundamental networking constructs, such as IPv4, IPv6, and Network Address Translation (NAT), as well as networking constructs on AWS. These include Amazon VPC, subnets, NAT Gateway, route tables, Amazon Elastic Compute Cloud (Amazon EC2), and AWS Transit Gateway. This blog doesn’t define all of these services in detail, but outlines their capabilities and how to use them in the provided design example.

Solution overview

This blog explores three options for centralized IPv6 internet egress, mapping to different deployment models and accounting for the various needs and constraints of organizations. Let’s review the considerations for each:

A) Centralized IPv4 Egress with Outbound-only Decentralized IPv6 Egress: you can centralize your outbound IPv4 traffic through a NAT gateway in a centralized VPC to reduce cost and simplify your operations using NAT64. When NAT64 is used, you must still maintain outbound IPv6 connectivity for the spoke VPCs. You can do this using an egress-only internet gateway, which is a highly available VPC component that allows outbound-only IPv6 traffic to the Internet. Optionally, you can use an Internet Gateway for IPv6 if you want bidirectional IPv6 connectivity.
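For reference, the decentralized IPv6 egress path in a spoke VPC can be sketched with two AWS CLI calls; the VPC and route table IDs below are placeholders:

```shell
# Create an egress-only internet gateway in the spoke VPC, then send all
# outbound IPv6 traffic (::/0) to it. IDs are placeholder examples.
EIGW_ID=$(aws ec2 create-egress-only-internet-gateway \
  --vpc-id vpc-0123456789abcdef0 \
  --query 'EgressOnlyInternetGateway.EgressOnlyInternetGatewayId' \
  --output text)

aws ec2 create-route \
  --route-table-id rtb-0123456789abcdef0 \
  --destination-ipv6-cidr-block ::/0 \
  --egress-only-internet-gateway-id "$EIGW_ID"
```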

If you want to centralize all of your IPv6 traffic in addition to the outbound IPv4 traffic to enforce security controls for all traffic, then two options you can consider are:

B) Centralized IPv4 and IPv6 Egress using NAT Gateway and NAT66 Instances: with this option, an Amazon EC2-based appliance in a centralized VPC is used to run NAT66. This appliance could also provide firewall functionality. All centralized traffic goes through the appliance, which translates the source IPv6 address of traffic from the spoke VPCs to the appliance’s own IPv6 address before forwarding it to the IPv6 destination. Traffic from IPv6 instances to IPv4-only services can continue to use the NAT Gateway.
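To sketch what such an appliance does under the hood, a Linux-based NAT66 instance typically needs only IPv6 forwarding and a masquerade rule. The interface name, spoke CIDR, and instance ID below are placeholder assumptions; the nat66.yaml template referenced later is the authoritative setup:

```shell
# On the NAT66 EC2 instance: enable IPv6 forwarding and masquerade
# traffic from the spoke VPCs behind the instance's own IPv6 address.
# Interface name and spoke CIDR are placeholders.
sudo sysctl -w net.ipv6.conf.all.forwarding=1
sudo ip6tables -t nat -A POSTROUTING -o eth0 \
  -s 2600:1f18:aaaa::/56 -j MASQUERADE

# The instance must also have source/destination checking disabled so
# that it can forward traffic it did not originate (placeholder ID):
aws ec2 modify-instance-attribute \
  --instance-id i-0123456789abcdef0 --no-source-dest-check
```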

C) Centralized IPv4 and IPv6 Egress using Proxy Instances and Network Load Balancer (NLB): if you want to use proxies to implement web filters or firewalls for all IPv4/IPv6-bound traffic, then this approach can be used. All outbound traffic to IPv4 and IPv6 destinations goes through the centralized proxy Amazon EC2 instances.

Read more about using NAT66 or Proxies in our documentation.

Using Transit Gateway, you can configure a single centralized VPC with multiple NAT gateways, instances, or proxies to consolidate outbound traffic for numerous VPCs. Simultaneously, you can use multiple route tables within Transit Gateway to maintain VPC-to-VPC isolation. This hub-and-spoke design enables you to easily manage all of your outbound internet communication securely from one place. However, it introduces a single point of dependency. Without Transit Gateway, you must deploy NAT gateways or NAT instances/proxies in each VPC needing IPv6-IPv4 communication. If you have a significant number of VPCs, then managing multiple internet gateways, NAT gateways, and EC2 instances increases operational overhead and costs. We discuss the centralized approach in this post, which provides a control mechanism that may meet your requirements. However, you must assess your use case to decide if it’s right for you.
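To illustrate how Transit Gateway route tables preserve spoke-to-spoke isolation, each spoke attachment can be associated with a route table into which only the egress VPC attachment propagates; spokes then reach the egress VPC but not each other. A sketch with placeholder IDs:

```shell
# Associate the spoke attachment with a dedicated TGW route table, and
# propagate only the egress VPC attachment into it. All IDs below are
# placeholders for illustration.
aws ec2 associate-transit-gateway-route-table \
  --transit-gateway-route-table-id tgw-rtb-0123456789abcdef0 \
  --transit-gateway-attachment-id tgw-attach-0aaaaaaaaaaaaaaaa

aws ec2 enable-transit-gateway-route-table-propagation \
  --transit-gateway-route-table-id tgw-rtb-0123456789abcdef0 \
  --transit-gateway-attachment-id tgw-attach-0eeeeeeeeeeeeeeee
```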

Baseline architecture

First, we show you the network architecture and routing configurations upon which the three solutions are based:


Figure 1: Baseline Architecture

Figure 2: Route Tables configuration for the Baseline Architecture

Figure 1 shows the baseline architecture used to build the three patterns for centralized egress. This architecture provides the Transit Gateway route table configuration (Figure 2: Table 3a), the Transit Gateway attachments’ route table configuration, and the Internet Gateway route table configuration for the Egress VPC (Figure 2: Table 1). In addition, the baseline architecture provides an EC2 instance for testing purposes.

To deploy your testing environment, follow these step-by-step instructions or create an AWS CloudFormation stack using this template: vpc-base.yaml. For more information, see Creating a stack on the AWS CloudFormation console.

Let’s look at each of the three architecture patterns that you can use to centralize IPv6 internet egress.

A) Centralized IPv4 Egress with Outbound-only Decentralized IPv6 Egress

In this section, we describe how to set up the first solution for implementing centralized Internet outbound connectivity for dual-stack VPCs: Centralized IPv4 Egress with Outbound-only Decentralized IPv6 Egress. The network architecture and the corresponding routing tables are outlined in the following diagrams:


Figure 3: Centralized IPv4 Egress with Outbound-only Decentralized IPv6 Egress

Figure 4: Route Tables configuration for Centralized IPv4 Egress with Outbound-only Decentralized IPv6 Egress

You can deploy this solution by following these step-by-step instructions or by creating two CloudFormation stacks using these templates: nat64.yaml and eigw.yaml. For the NetworkStackName parameter, make sure to specify the name of the stack that you previously created from the vpc-base.yaml template. For more information, see Creating a stack on the AWS CloudFormation console and Specifying stack name and parameters.

Considerations

With this approach, the NAT Gateway and egress-only internet gateway provide high availability and performance for your dual-stack VPCs. Traffic to IPv4-only destinations is processed by the Transit Gateway and NAT Gateway. Therefore, you don’t incur any data processing costs for IPv6-only traffic using the egress-only internet gateways, which are decentralized in the different application VPCs. Since not all of the traffic is centralized, not all of the use cases for centralized architectures apply here, such as centralized traffic filtering or monitoring. For traffic that isn’t centralized, you can deploy additional decentralized filtering or monitoring solutions in your dual-stack VPCs.

In the next two sections, we show you how to configure centralized egress to IPv6 endpoints, which you can leverage to reach public IPv6 endpoints from VPCs configured with non-advertised IPv6 ranges. We’ll discuss two approaches to achieve this goal: NAT Instances and Proxy Instances.

B) Centralized IPv4 and IPv6 Egress using NAT Gateways and NAT66 Instances

In this section, we describe how to set up the second solution for implementing centralized Internet outbound connectivity for dual-stack VPCs: Centralized IPv4 and IPv6 Egress using NAT Gateways and NAT66 Instances. The network architecture and the corresponding routing tables are outlined in the following diagrams:


Figure 5: Centralized NAT64 and NAT66 Egress using NAT Gateways and NAT66 Instances

Figure 6: Route Tables configuration for Centralized NAT64 and NAT66 Egress using NAT Gateways and NAT66 Instances

Since we’re using NAT instances for this example, we must provide the route configuration for each Availability Zone (AZ) individually (Figure 6: tables 2a and 2b). This allows us to deploy the NAT solution in a highly available mode.
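For example, the ::/0 route in each AZ’s route table can point at the elastic network interface (ENI) of the NAT66 instance in that same AZ, keeping traffic AZ-local and avoiding a single-instance dependency. A sketch with placeholder IDs:

```shell
# AZ a: default IPv6 route to the ENI of the NAT66 instance in AZ a
aws ec2 create-route \
  --route-table-id rtb-0aaaaaaaaaaaaaaaa \
  --destination-ipv6-cidr-block ::/0 \
  --network-interface-id eni-0aaaaaaaaaaaaaaaa

# AZ b: default IPv6 route to the ENI of the NAT66 instance in AZ b
aws ec2 create-route \
  --route-table-id rtb-0bbbbbbbbbbbbbbbb \
  --destination-ipv6-cidr-block ::/0 \
  --network-interface-id eni-0bbbbbbbbbbbbbbbb
```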

You can deploy this solution in your test environment by following these step-by-step instructions, or by creating two CloudFormation stacks using these templates: nat64.yaml and nat66.yaml. For the NetworkStackName parameter, make sure to specify the name of the stack that you previously created from the vpc-base.yaml template. For more information, see Creating a stack on the AWS CloudFormation console and Specifying stack name and parameters.

Considerations

With the NAT66 Instances approach, the applications deployed in the Application VPCs won’t require any modification to connect to public IPv6 endpoints. The NAT instances act as a transparent proxy for your workloads, which will be allowed to connect to public IPv6 endpoints seamlessly.

However, unlike NAT Gateways (which currently don’t support NAT66), NAT66 Instances aren’t a managed service, so you must manage some key aspects, such as availability and scalability, to ensure business continuity for your workloads. For more information, see Compare NAT gateways and NAT instances.

C) Centralized IPv4 and IPv6 Egress using Proxy Instances and NLB

In this section, we describe how to set up the third solution for implementing centralized Internet outbound connectivity for dual-stack VPCs: Centralized IPv4 and IPv6 Egress using Proxy Instances and NLB. The network architecture and the corresponding routing tables are outlined in the following diagrams:


Figure 7: Centralized IPv4 and IPv6 Egress using Proxy Instances and NLB

Figure 8: Route Tables configuration for Centralized IPv4 and IPv6 Egress using Proxy Instances and NLB

For Centralized IPv4 and IPv6 Egress using Proxy Instances and NLB, we add two proxy servers behind an NLB to the baseline architecture (Figure 1), as shown in Figure 7. In addition to the servers, we must configure the corresponding routes within the App VPCs, the Egress VPC, and the Transit Gateway (Figure 8).

You can deploy this solution by following these step-by-step instructions, or by creating a CloudFormation stack using this template: nat66-proxy.yaml. For the NetworkStackName parameter, make sure to specify the name of the stack that you previously created from the vpc-base.yaml template. For more information, see Creating a stack on the AWS CloudFormation console and Specifying stack name and parameters.

Considerations

Adopting the Proxy Instances approach solves the availability and scalability challenges presented by the NAT Instances approach, thanks to the use of an NLB in combination with an Auto Scaling group.

However, in this case the applications deployed in the Application VPCs require you to configure the proxy explicitly to connect to public IPv6 endpoints. On the other hand, for private IPv6 endpoints (e.g., VPC endpoints), we recommend not setting the proxy. In this way, only the traffic that must strictly go through the proxy is routed through it. Furthermore, this reduces the utilization of Transit Gateway bandwidth, thereby saving costs and optimizing performance.
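One common way to apply this recommendation is with the standard proxy environment variables on the application hosts: send Internet-bound HTTP(S) traffic through the central proxies (reachable via the NLB DNS name) while exempting private endpoints. A minimal sketch, where the hostnames and the bypass list are placeholder assumptions:

```shell
# Route HTTP(S) traffic through the central proxy fleet behind the NLB,
# but bypass it for private endpoints (placeholder values throughout).
export http_proxy="http://my-egress-nlb.example.internal:3128"
export https_proxy="$http_proxy"
# Hosts and domain suffixes that should NOT traverse the proxy:
export no_proxy="localhost,169.254.169.254,.internal.example"
echo "$no_proxy"
```

Most HTTP clients, including curl, honor these variables automatically; curl’s --proxy flag, used in the testing section that follows, is the explicit equivalent.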

Deployment testing

You can use AWS Systems Manager, a bastion host, or any other method to log in to the test instance. To verify that traffic is routed correctly through the NAT Gateway in the Centralized Egress VPC for the first pattern, we’ll use curl for testing. The test must be conducted against a URL that only resolves to IPv4: ec2-54-171-191-22.eu-west-1.compute.amazonaws.com. The test target is an example public EC2 instance with a public IPv4 address.

In the following output, we receive a synthesized IPv6 address for the domain. Using this IPv6 address, we can successfully connect using curl as shown here:

sh-4.2$ dig AAAA ec2-54-171-191-22.eu-west-1.compute.amazonaws.com
; <<>> DiG 9.11.4-P2-RedHat-9.11.4-26.P2.amzn2.5.2 <<>> AAAA ec2-54-171-191-22.eu-west-1.compute.amazonaws.com
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 46569
;; flags: qr rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 1

;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 4096
;; QUESTION SECTION:
;ec2-54-171-191-22.eu-west-1.compute.amazonaws.com. IN AAAA

;; ANSWER SECTION:
ec2-54-171-191-22.eu-west-1.compute.amazonaws.com. 20 IN AAAA 64:ff9b::36ab:bf16

;; Query time: 4 msec
;; SERVER: 10.0.0.2#53(10.0.0.2)
;; WHEN: Thu Jul 21 03:33:32 UTC 2022
;; MSG SIZE rcvd: 149

sh-4.2$ telnet ec2-54-171-191-22.eu-west-1.compute.amazonaws.com 443
Trying 64:ff9b::36ab:bf16...
Connected to ec2-54-171-191-22.eu-west-1.compute.amazonaws.com.
Escape character is '^]'.

sh-4.2$ curl -IL http://ec2-54-171-191-22.eu-west-1.compute.amazonaws.com
HTTP/2 200
Date: Wed, 13 Jul 2022 06:21:15 GMT
Server: Apache/2.4.25 (Debian)
Expires: Thu, 19 Nov 1981 08:52:00 GMT
Cache-Control: no-store, no-cache, must-revalidate
Pragma: no-cache
Set-Cookie: PHPSESSID=ekk6oceeq94bvg4vg5q8f2evq0; expires=Wed, 13-Jul-2022 06:32:35 GMT; Max-Age=900; path=/; domain=.ec2-54-171-191-22.eu-west-1.compute.amazonaws.com
Content-Type: text/html; charset=UTF-8

To test centralized egress to IPv6 endpoints, we’ll use a private Amazon Route 53 record, “private-ipv6-test.domain”, for illustrative purposes.

In the following output, we receive an IPv6 address for the domain:

sh-4.2$ dig AAAA private-ipv6-test.domain

; <<>> DiG 9.11.4-P2-RedHat-9.11.4-26.P2.amzn2.5.2 <<>> AAAA private-ipv6-test.domain
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 15390
;; flags: qr rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 1

;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 4096
;; QUESTION SECTION:
;private-ipv6-test.domain.                 IN      AAAA

;; ANSWER SECTION:
private-ipv6-test.domain.          66      IN      AAAA    2a05:d018:1843:8600:f985:a5b1:5c7:e0aa

;; Query time: 0 msec
;; SERVER: 10.0.0.2#53(10.0.0.2)
;; WHEN: Tue Jul 12 12:54:12 UTC 2022
;; MSG SIZE  rcvd: 70

When using NAT instances as described in the second pattern, we can successfully connect using curl as shown here:

sh-4.2$ curl -6 -I http://private-ipv6-test.domain
HTTP/1.1 200 OK
Date: Wed, 13 Jul 2022 06:17:35 GMT
Server: Apache/2.4.25 (Debian)
Expires: Thu, 19 Nov 1981 08:52:00 GMT
Cache-Control: no-store, no-cache, must-revalidate
Pragma: no-cache
Set-Cookie: PHPSESSID=ekk6oceeq94bvg4vg5q8f2evq0; expires=Wed, 13-Jul-2022 06:32:35 GMT; Max-Age=900; path=/; domain=.private-ipv6-test.domain
Content-Type: text/html; charset=UTF-8

When using proxy instances as shown in the third pattern, we can successfully connect using curl with the --proxy option as shown in the following (the environment variable $PROXY_HOSTNAME must be set to the DNS name of the NLB created earlier):

sh-4.2$ curl -6 -I --proxy $PROXY_HOSTNAME:3128 http://private-ipv6-test.domain/
HTTP/1.1 200 Connection established
HTTP/1.1 200 OK
Date: Tue, 12 Jul 2022 13:21:19 GMT
Server: Apache/2.4.25 (Debian)
Expires: Thu, 19 Nov 1981 08:52:00 GMT
Cache-Control: no-store, no-cache, must-revalidate
Pragma: no-cache
Set-Cookie: PHPSESSID=t524d9tvk1u9gds883rn2s2p53; expires=Tue, 12-Jul-2022 13:36:19 GMT; Max-Age=900; path=/; domain=.private-ipv6-test.domain
Content-Type: text/html; charset=UTF-8

Conclusion

This post reviews various methods of centralizing Internet outbound traffic for dual stack Amazon Virtual Private Clouds (Amazon VPCs), from IPv6 hosts in spoke VPCs to IPv4 and IPv6 destinations, as well as how to preserve VPC isolation and enable Internet connectivity for applications running in non-advertised IPv6 address spaces. By consolidating your outbound traffic, you can manage outbound communications security, scaling, and configuration in one place. Using AWS Transit Gateway, you can configure a single VPC to consolidate outbound traffic for numerous VPCs.

The blog shows you how to deploy the essential components of this design, including the VPCs, subnets, an internet gateway, Transit Gateway, and route tables. It shows you how to deploy a new Transit Gateway, attach it to all three VPCs, and set up routing for connectivity between the VPCs. Finally, it walks you through scenarios using centralized IPv4 egress with decentralized IPv6 egress, centralized IPv4 and IPv6 egress using NAT Gateways and NAT66 instances, and centralized IPv4 and IPv6 egress using proxy instances. Visit the aws-samples GitHub repository for all CloudFormation templates provided in this blog. For further information about centralizing Internet outbound traffic with IPv6, refer to the Advanced dual-stack and IPv6-only network designs whitepaper.


Opeoluwa Victor Babasanmi

Victor is a Sr. Technical Account Manager at AWS. He focuses on providing customers with technical guidance on planning and building solutions using best practices, and proactively keeps their AWS environments operationally healthy. When he is not helping customers, you may find him playing soccer, doing CrossFit, or looking for a new adventure somewhere.


Andrea Meroni

Andrea is a Cloud Infrastructure Architect at Amazon Web Services. He enables customers to develop highly scalable, resilient and secure applications in the AWS cloud. In his spare time, Andrea loves to read, watch horror movies and hike.


Oleksandr Pogorielov

Oleksandr is a Sr. DevSecOps Architect in AWS ProServe with more than 10 years of experience in virtualization, cloud technologies, and automation, and 3 years in the automotive industry.


Claudia Izquierdo

Claudia Izquierdo is an AWS Solution Architect for the Public Sector and has helped multiple government entities and non-governmental organizations in Latin America to fulfill their business missions and objectives. She is passionate about the topics of cloud, networking, security and cybersecurity, as well as being a globally awarded Cisco instructor who supports projects for more women in technology.


Scott Sunday

Scott is a Senior Technical Curriculum Developer at AWS. He works with Dedicated Cloud customers and service teams to build training courses that help customers as they migrate to the cloud and use AWS services. Prior to AWS, Scott designed and implemented secure managed networks and developed and led technical training for the Department of Defense (DoD).

(April 25, 2024) Update – The images in this post were updated to correct errors in IPv6 addresses.