Networking & Content Delivery

AWS Direct Connect expands presence in Australia with 100 Gbps connections and MACsec

AWS Direct Connect makes it easy to establish a dedicated network connection from your premises to AWS. With the launch of a new AWS Direct Connect location in the NextDC S2 Sydney data center, you can now establish dedicated 100 Gbps and encrypted connections with resiliency across two Sydney locations. Equinix SY3, an existing location in Sydney, also supports both MACsec and 100 Gbps connections. And, with the introduction of AWS Direct Connect SiteLink, you can create private network connections between your on-premises locations, such as offices and data centers, connected to Direct Connect locations throughout the world.

In this blog post, we’ll illustrate three of the most common connectivity patterns available in Sydney using the new Direct Connect location, and walk through the high-level steps required to migrate existing Direct Connect connections to higher speed options.

(Note: “Direct Connect” is frequently abbreviated as DX, which you often encounter in technical blogs and presentations.)

Why Sydney?

Reducing the physical distance your data must travel lowers latency and removes potential points of failure. This new location in Sydney makes it easier to connect your network directly to the AWS network at speeds of up to 100 Gbps, with MACsec encryption if desired.

Why 100 Gbps?

Native 100 Gbps connections provide higher bandwidth without the operational overhead of managing multiple 10 Gbps connections in a Link Aggregation Group (LAG). Previously, to get more than 10 Gbps of bandwidth, you had to order multiple DX connections (up to a maximum of four) and configure them in a LAG. Speeds beyond 40 Gbps required additional routing configuration to split network traffic across multiple LAGs, such as using ECMP on AWS Transit Gateway or setting local preference community tags per network prefix. Native 100 Gbps connections eliminate the effort of building LAGs from lower-capacity links, reduce the number of physical ports and patches required, and lower the overall operational cost of maintaining high-speed connections.
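
To make the contrast concrete, here is a minimal boto3 sketch of the two approaches. The location code and connection names are placeholders for illustration; dx.describe_locations() lists the real location codes.

```python
import boto3

dx = boto3.client("directconnect", region_name="ap-southeast-2")

# Before: aggregate four 10 Gbps dedicated connections into a LAG
# to reach 40 Gbps of usable bandwidth.
lag = dx.create_lag(
    numberOfConnections=4,
    location="EXAMPLE1",            # placeholder; see describe_locations()
    connectionsBandwidth="10Gbps",
    lagName="legacy-40g-lag",
)

# After: a single native 100 Gbps dedicated connection, no LAG needed.
conn = dx.create_connection(
    location="EXAMPLE1",
    bandwidth="100Gbps",
    connectionName="native-100g",
)
print(conn["connectionId"], conn["connectionState"])
```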

The increased capacity delivered by 100 Gbps connections is beneficial to applications that transfer large-scale datasets, such as broadcast media distribution, advanced driver assistance systems used for autonomous vehicles, and financial services trading and market information systems.

Today, 100 Gbps connections are only available when using dedicated DX connections. 100 Gbps hosted connections are not currently available from AWS Direct Connect Delivery Partners. (Note: A dedicated connection is made through a 100 Gbps Ethernet port dedicated to a single customer and sourced from AWS. Hosted connections are sourced from AWS Direct Connect Delivery Partners that have an existing network link between themselves and AWS.)

Imagine you are a broadcast media company. A single uncompressed 4K video stream is upwards of 10 Gbps by itself. As a result, you must compress your video streams to send more than one stream over a single 10 Gbps DX connection. With a single 100 Gbps connection, you have the capacity for multiple uncompressed 4K video streams, and given the number of cameras at large sports events, this makes 4K broadcast mixing in the cloud a reality.

High Performance Computing (HPC) is another common use case for 100 Gbps connections. In HPC, you prepare large datasets on premises and analyze them using less expensive elastic compute capacity in the AWS Cloud. With the availability of next-generation sequencing methods that rapidly sequence entire genomes, the volume of data requiring compute-intensive analysis has grown exponentially. Using 100 Gbps connections, genomic sequences are quickly and securely transferred into Amazon S3, ready for consumption by genomics workflows.

Direct Connect or network peering

A common question is, “Why should I use DX over public peering when peering with Amazon might be less expensive?”

While these two approaches seem similar, they are operationally very different. Peering is aimed at use cases where the entity peering with AWS is an ISP or a content provider: it assumes a level of competence with internet peering, only advertises routes for the local geographic region, and comes with no service level agreement. AWS Direct Connect, in contrast, establishes private connectivity between AWS and your data center, office, or colocation environment, which in many cases reduces your network costs, increases bandwidth throughput, and provides a more consistent network experience than internet-based connections.

This table outlines the tradeoffs between peering and Direct Connect:

| Peering | Direct Connect |
| --- | --- |
| Strict policies on who can peer | Any AWS customer can use |
| Standard internet egress charges | Reduces data transfer costs |
| No SLA | SLA available |
| No private access to your Amazon VPC; must use a VPN | Private access to your VPCs using private or transit VIFs |
| No guarantee that traffic will use your peering connections | Full set of AWS prefixes, which you can filter down using BGP communities |
| Best-effort support | AWS Premium Support available |

Use Case – Resilient 100G Direct Connect

With two Direct Connect locations offering both 100 Gbps and MACsec, you can now build connections with identical capabilities across both sites to achieve secure, high-bandwidth, and resilient connectivity. This pattern is eligible for the Direct Connect SLA; we show it in figure 1.

Figure 1: Corporate data center resiliently connected to two VPCs using DX connections from DX locations in NextDC S2 and Equinix SY3

The easiest way to create the necessary Direct Connect connections is the connection wizard in the Direct Connect console: select the High Resiliency level, then choose the bandwidth and locations for your connections and, in the additional settings section, MACsec support. By using the connection wizard, you’re assured that your links are at the same speed, named consistently, and connected to different devices if you select a resiliency level with multiple connections in the same location. We show this in the following screenshots (figure 2).

Figure 2: Using Connection wizard in the Direct Connect console to order resilient connections
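
If you’d rather script the equivalent of the wizard, a minimal boto3 sketch might look like the following. The location codes and connection names are placeholders; dx.describe_locations() returns the real codes along with each location’s available port speeds and MACsec capability.

```python
import boto3

dx = boto3.client("directconnect", region_name="ap-southeast-2")

# Placeholder location codes; describe_locations() lists the real codes
# together with available port speeds and MACsec support.
for name, loc in [("syd-100g-a", "LOCATION1"), ("syd-100g-b", "LOCATION2")]:
    conn = dx.create_connection(
        location=loc,
        bandwidth="100Gbps",
        connectionName=name,
        requestMACSec=True,  # ask for a MACsec-capable port
    )
    print(conn["connectionId"], conn["connectionState"])
```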

Then, you must download a Letter of Authorization (LOA) and present it to your colocation provider, along with your request to patch the cross connects. For an introduction to MACsec on Direct Connect, check out the Adding MACsec security to AWS Direct Connect Connections blog post.
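 
As a boto3 sketch of those same steps (the connection ID and key values below are hypothetical): download the LOA as a PDF, and once the cross connect is live, associate a MACsec pre-shared key with the connection.

```python
import boto3

dx = boto3.client("directconnect", region_name="ap-southeast-2")
conn_id = "dxcon-EXAMPLE"  # hypothetical connection ID

# Download the LOA as a PDF to hand to your colocation provider.
loa = dx.describe_loa(connectionId=conn_id, loaContentType="application/pdf")
with open(f"{conn_id}-loa.pdf", "wb") as f:
    f.write(loa["loaContent"])

# After the cross connect is patched, associate a MACsec pre-shared key
# (connection key name + connectivity association key) with the connection.
dx.associate_mac_sec_key(
    connectionId=conn_id,
    ckn="0123abcd...",  # placeholder hex CKN
    cak="dcba3210...",  # placeholder hex CAK
)
```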

Now that you’ve got your connections up, you must create a virtual interface (VIF) to establish IP connectivity and access AWS services. In this case, you are connecting to a Transit Gateway, so you create a transit VIF and Direct Connect Gateway.
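
A minimal boto3 sketch of those two resources follows; the connection ID, ASNs, and VLAN are hypothetical values for illustration.

```python
import boto3

dx = boto3.client("directconnect", region_name="ap-southeast-2")

# Create a Direct Connect gateway to associate with your Transit Gateway.
dxgw = dx.create_direct_connect_gateway(
    directConnectGatewayName="syd-dxgw",
    amazonSideAsn=64512,              # example Amazon-side private ASN
)["directConnectGateway"]

# Attach a transit VIF to the connection; repeat for the second location.
dx.create_transit_virtual_interface(
    connectionId="dxcon-EXAMPLE",     # hypothetical connection ID
    newTransitVirtualInterface={
        "virtualInterfaceName": "syd-transit-vif-1",
        "vlan": 100,
        "asn": 65001,                 # your on-premises BGP ASN
        "mtu": 8500,                  # transit VIFs support jumbo frames
        "directConnectGatewayId": dxgw["directConnectGatewayId"],
    },
)
```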

You can apply the same level of resiliency when you need to communicate with public endpoints. For example, a media company that stores media in S3 can bypass complexity and reduce cost by using public virtual interfaces to connect directly to S3 public endpoints. We show this pattern in figure 3.

Figure 3: Corporate data center resiliently connected to public AWS endpoints using DX connections from two separate locations and a public VIF
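
Creating the public virtual interface follows the same shape. Here is a hedged boto3 sketch with placeholder addresses from the documentation ranges; public VIFs require public IP space that you own or that AWS allocates to you.

```python
import boto3

dx = boto3.client("directconnect", region_name="ap-southeast-2")

# Public VIF for reaching public AWS endpoints such as Amazon S3.
dx.create_public_virtual_interface(
    connectionId="dxcon-EXAMPLE",              # hypothetical connection ID
    newPublicVirtualInterface={
        "virtualInterfaceName": "syd-public-vif-1",
        "vlan": 200,
        "asn": 65001,                          # your BGP ASN
        "amazonAddress": "203.0.113.1/30",     # placeholder (doc range)
        "customerAddress": "203.0.113.2/30",   # placeholder (doc range)
        # Public prefixes you advertise to AWS over this VIF:
        "routeFilterPrefixes": [{"cidr": "198.51.100.0/24"}],
    },
)
```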

You can combine these two patterns to provide secure, resilient, and high-speed access to your VPCs and public AWS endpoints using the same Direct Connect connections.

Don’t forget to run a failover test before you carry production traffic over your newly created resilient connections! To do this using the Direct Connect console, select your active virtual interface and click “Bring down BGP” from the Actions menu to kick off the process. You can find a detailed description of this process in the Testing AWS Direct Connect Resiliency with Resiliency Toolkit – Failover Testing blog post.
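
The same failover test can be started programmatically; a minimal sketch with a hypothetical VIF ID:

```python
import boto3

dx = boto3.client("directconnect", region_name="ap-southeast-2")

# Bring down the BGP session(s) on one VIF for a fixed window, verify
# traffic fails over to the other connection; AWS then restores BGP.
test = dx.start_bgp_failover_test(
    virtualInterfaceId="dxvif-EXAMPLE",  # hypothetical VIF ID
    testDurationInMinutes=10,
)
print(test["virtualInterfaceTest"]["testId"])
```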

Use Case – SiteLink for connectivity between data centers

If you have multiple on-premises sites that need network connectivity, the new Direct Connect SiteLink feature lets you take advantage of the AWS Global Network to create private site-to-site network links. With SiteLink configured on your private or transit virtual interfaces, you no longer need to maintain additional WAN links between sites. Instead, you can send traffic between your sites connected to Direct Connect via AWS. Traffic between SiteLink sites traverses the shortest path on the AWS Global Network, bypassing AWS Regions to improve performance.

For example, if you need to connect your data center in NextDC S2 Sydney to an office or data center in Perth, you can connect each to its local DX location and activate SiteLink. This approach lets you take advantage of AWS’s multiple high-speed network links between the Sydney and Perth DX locations. And, with additional DX locations in Sydney, Canberra, Melbourne, and overseas, you can expand the reach of your operations throughout Australia and globally without having to purchase and manage costly WAN links. We show this in figure 4.

Figure 4: Two corporate data centers using DX SiteLink to communicate between NextDC S2 and NextDC P1 without traversing an AWS Region

Configuring SiteLink on new and existing virtual interfaces is simply a matter of activating the feature using the Direct Connect console, Command Line Interface (CLI), or API. Once configured on a virtual interface, SiteLink re-advertises routes learned from your routers via BGP to all other SiteLink-enabled virtual interfaces.
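
For example, enabling SiteLink on existing VIFs via boto3 might look like the following (the VIF IDs are hypothetical); you can also set enableSiteLink when creating a VIF.

```python
import boto3

dx = boto3.client("directconnect", region_name="ap-southeast-2")

# Enable SiteLink on the private or transit VIF at each site.
for vif_id in ["dxvif-SYDNEY", "dxvif-PERTH"]:  # hypothetical VIF IDs
    dx.update_virtual_interface_attributes(
        virtualInterfaceId=vif_id,
        enableSiteLink=True,
    )
```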

Note: For seamless integration with SiteLink, each of your sites needs its own BGP ASN. For a deep dive into SiteLink architecture and routing, check out the Introducing AWS Direct Connect SiteLink and Advanced Routing scenarios with AWS Direct Connect SiteLink blog posts.

Migrating from existing Direct Connect Connections

If you have an existing Direct Connect connection, but want to migrate to 100 Gbps, a MACsec-capable connection, or another location, you must create a new connection and migrate over.

When making this change, it is important to balance migration effort against service interruption: either migrate your existing virtual interfaces, accepting a short connectivity outage as they’re moved, or create new VIFs and use BGP to shift traffic without an outage. Regardless of which approach you choose, migration takes five steps:

  1. Create new Direct Connect connections and order cross-connects and/or partner connectivity at your chosen Direct Connect location. In Sydney, this would be NextDC S2, Equinix SY3, or Global Switch SY6
  2. Configure your on-premises network devices
  3. Create new virtual interfaces, if required
  4. Migrate to your new connections, either by moving your existing virtual interfaces or by making routing changes to move onto newly created ones (see the sketch after this list), and fully test your connectivity
  5. Decommission your old connections
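
A hedged boto3 sketch of the two options in step 4, with hypothetical connection and VIF IDs:

```python
import boto3

dx = boto3.client("directconnect", region_name="ap-southeast-2")

# Option A: re-associate an existing VIF with the new connection
# (expect a brief connectivity outage while the VIF moves).
dx.associate_virtual_interface(
    virtualInterfaceId="dxvif-EXAMPLE",  # hypothetical VIF ID
    connectionId="dxcon-NEW100G",        # hypothetical new connection
)

# Option B: create new VIFs on the new connection and shift traffic with
# BGP (for example, AS path prepending or local preference communities),
# then decommission the old connection once traffic has drained.
dx.delete_connection(connectionId="dxcon-OLD10G")  # hypothetical old connection
```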

For an in-depth guide to this process, check out the Upgrading AWS Direct Connect to 100Gbps in 5 steps blog post.

Conclusion

In this blog post, we introduced the new DX location at NextDC S2, a second DX location in Sydney capable of both 100 Gbps speeds and MACsec encryption. Multiple DX locations with 100 Gbps speeds and MACsec increase resiliency for both private and public connectivity, with optional encryption, into the AWS Cloud for your workloads in Sydney. Close physical proximity to customers in NextDC S2 reduces costs and latency for connectivity into the AWS Cloud, and enables architecture patterns using Direct Connect SiteLink for inter-site connectivity.

Tony Hawke

Tony is a Specialist Technical Account Manager – Networking based out of Canberra, Australia. Tony has been supporting AWS customers in Australia, New Zealand, and ASEAN across all industries since 2016. Prior to AWS, Tony architected and operated large LAN/WANs in enterprise and higher education.