AWS Compute Blog

NEW: Simplifying the use of third-party block storage with AWS Outposts

This post is written by Kate Sposato, Senior Solutions Architect, EC2 Edge Compute

AWS is excited to announce deeper collaboration with industry-leading storage providers to streamline the use of third-party storage with AWS Outposts. You can now attach and use external block data volumes from NetApp® on-premises enterprise storage arrays and Pure Storage® FlashArray™ directly from the AWS Management Console.

Outposts is a fully managed service that extends AWS infrastructure, AWS services, APIs, and tools to customer premises. By providing local access to AWS managed infrastructure, Outposts allows you to build and run applications on premises using the same application programming interfaces (APIs) as in AWS Regions. Moreover, this is done while using local compute and storage resources to meet lower latency and local data processing needs. Outposts is available in various rack and server form factors.

Many of you have block storage systems running in your on-premises environments that provide advanced data storage and management features, such as snapshots, replication, and encryption, to protect data integrity and security. There are various use cases in which applications running on Amazon Elastic Compute Cloud (Amazon EC2) instances on Outposts need to access data through external volumes backed by these storage systems. These include: regulatory auditing requirements, government and local regulation compliance, high data durability and resiliency requirements, low-latency data access, and migration of on-premises applications that are tightly coupled with existing external storage systems. To make it easier for you to use external volumes with Outposts, AWS has validated a broad range of third-party storage solutions through the AWS Outposts Ready Program. With this program, you can easily identify storage solutions that are tested to run with Outposts.

Today, we are taking our integration with storage solutions from NetApp and Pure Storage to the next level. Outposts now has a simplified and automated way to launch EC2 instances with attached block storage from external infrastructure through the AWS Management Console. The new integration includes automated user script generation and attachment of data volumes to EC2 instances running on 42U Outposts racks and 2U Outposts servers. This integration reduces the friction associated with using the advanced data management and security features of external storage infrastructure in combination with Outposts, allowing you to create a resilient, compliant, and optimized storage and compute infrastructure.

Outposts rack storage and networking overview

Outposts racks support Amazon Elastic Block Store (Amazon EBS) volumes for EC2 instances, which provide persistent local block storage.

EC2 instances running on Outposts racks can access data stored on external block storage arrays over the Outposts local gateway (LGW). An LGW enables connectivity between the Outpost subnets, where EC2 instances run, and the on-premises network. It carries storage traffic between the EC2 instances running on the Outposts rack and the local network. The LGW is created by AWS as part of the Outposts rack installation process. Each Outposts rack supports a single LGW.
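For example, to carry storage traffic over the LGW, you associate your VPC with the LGW route table and add a route for the on-premises storage network that targets the local gateway. The following AWS CLI sketch illustrates the idea; the IDs and the 192.168.0.0/16 on-premises CIDR are placeholders for your own values.

# Find the local gateway and its route table created during Outpost installation
aws ec2 describe-local-gateways
aws ec2 describe-local-gateway-route-tables

# Associate the VPC that contains the Outpost subnet with the LGW route table
aws ec2 create-local-gateway-route-table-vpc-association --local-gateway-route-table-id lgw-rtb-EXAMPLE --vpc-id vpc-EXAMPLE

# Route traffic destined for the on-premises storage network through the local gateway
aws ec2 create-route --route-table-id rtb-EXAMPLE --destination-cidr-block 192.168.0.0/16 --local-gateway-id lgw-EXAMPLE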

The following diagram shows an EC2 instance running on an Outposts rack with an elastic network interface (ENI) and LGW configured for instance connectivity. An external storage array communicates with the EC2 instance running on the Outposts rack through the Outpost network devices (ONDs). Customer Network Devices (CNDs) that connect to EC2 instances running on Outposts racks need to support the following:

  • Link aggregation: connections to the Outposts rack network devices are added to a link aggregation group (LAG).
  • VLANs: Virtual LANs (VLANs) are configured between each Outposts rack top-of-rack (ToR) device and any customer devices, including data stores.
  • Dynamic routing: Border Gateway Protocol (BGP) is configured between the CND and the OND for each VLAN. The following diagram shows two BGP sessions in total between the devices.

Figure 1. Outposts rack and Amazon EC2 networking architecture

Outposts server storage and networking overview

Outposts servers come with internal NVMe SSD-based high-performance instance storage. Similar to AWS Regions, instance storage is allocated directly to the EC2 instance and follows the lifecycle of the instance. For example, if an EC2 instance is terminated, then the instance storage associated with the instance is also deleted. If you want data to persist after the instance is terminated, you can use external storage solutions to complement the instance storage included with Outposts servers.

Outposts servers have a local network interface (LNI). This logical networking component connects the EC2 instances running on the Outposts servers subnet to the on-premises network and allows communication to other on-premises storage, compute, and networking appliances.

To support the Amazon EC2 on Outposts to external storage array integration, an LNI must be created and then added to the EC2 instance during instance launch. LNIs can only be enabled through the AWS Command Line Interface (AWS CLI) or an AWS software development kit (SDK) using the following command. The subnet ID is the Outposts server subnet, and the device index should be unique within the subnet.

aws ec2 modify-subnet-attribute --subnet-id <subnet id> --enable-lni-at-device-index <device index>

In the on-premises network, you must have a Network Interface Card (NIC) at the same device index that you specified when running the preceding CLI command.

Further detailed steps for this workflow are listed in the Outposts server user guide.

When the local network interfaces are enabled on an Outpost subnet, the EC2 instances in the Outpost subnet can be configured to include this LNI in addition to the ENI. The LNI connects to the on-premises network while the ENI connects to the VPC.
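For example, if LNIs were enabled at device index 1 with the preceding command, you can launch an instance whose primary ENI sits at device index 0 and whose interface at device index 1 serves as the LNI. The following AWS CLI sketch shows the idea with placeholder AMI, instance type, key pair, subnet, and security group IDs; the Console Launch Instance Wizard performs the equivalent configuration for you.

# Launch an instance on the Outposts server subnet with an ENI (index 0) and an LNI (index 1)
aws ec2 run-instances \
  --image-id ami-EXAMPLE \
  --instance-type c6id.large \
  --key-name my-key-pair \
  --network-interfaces "DeviceIndex=0,SubnetId=subnet-EXAMPLE,Groups=sg-EXAMPLE" "DeviceIndex=1,SubnetId=subnet-EXAMPLE"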

The following diagram shows an EC2 instance running on an Outposts server with both an ENI and an LNI configured for instance connectivity. An external storage array is connected to the Outposts server through a CND, using the NVMe-over-TCP or iSCSI protocol.

Figure 2. Outposts server and Amazon EC2 networking architecture

Supported operating systems and AWS Support

The rest of this post covers how to launch an EC2 instance on an Outposts 2U server or Outposts rack with a connected external block storage volume for local data access from within the EC2 instance. The current release of this feature supports EC2 instances running Microsoft Windows Server 2022 and Red Hat Enterprise Linux 9 (RHEL 9) based operating systems.

Support for Outposts and all Outposts integration features, including this one, requires an active AWS Enterprise Support Plan or AWS Enterprise On-Ramp Support Plan. Support for external storage arrays and configurations can be obtained from the respective storage vendor and may require an additional support plan depending on the vendor and the storage solution implemented.

This post assumes you’re familiar with the basic functionality of Outposts servers and Outposts rack. If you would like to learn more about the Outposts family in general, then the user guide, What is AWS Outposts?, is a great place to start.

Solution deployment

The following sections outline the solution deployment.

Prerequisites:

  1. An Outposts 2U server or Outposts rack is provisioned, activated, and connected to the customer network.
  2. A block storage array is connected on the same network and accessible to Outposts subnets.
  3. A block data volume is configured and running on the storage array. The unique identifier for this volume is necessary for launching the EC2 instance on the Outpost, and the volume must remain provisioned on the storage array.
  4. The IP address and port number (optional for iSCSI connections) of the block storage volume, which are necessary for launching the EC2 instance on the Outpost.
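If you want to confirm the first two prerequisites from the AWS side, the AWS CLI can list your Outposts and the subnets created on them. A short sketch, using a placeholder Outpost ARN:

# List the Outposts available to this account and Region
aws outposts list-outposts

# List the subnets created on a specific Outpost
aws ec2 describe-subnets --filters Name=outpost-arn,Values=arn:aws:outposts:us-west-2:111122223333:outpost/op-EXAMPLE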

Deployment architecture overview

The following deployment architecture shows the workflow for attaching an external storage array to an Outpost, launching an EC2 instance through the AWS Management Console, and accessing the data on the external storage array from within the EC2 instance running on the Outpost.

Figure 3. Third-party block storage on Outposts architecture overview

Deployment steps for NVMe-over-TCP connections

1. (Prerequisite) If there is no block data volume already running and configured on the compatible storage array, this must be completed in the storage solution’s interface before moving to Step 2.

a. Create an NVMe device, subsystem, and namespace for the block data volume.

b. Optionally, generate a host NQN to be used for the EC2 instance connection (a quick way to generate one is sketched at the end of this step), and add it to the allow list for the appropriate subsystems.

c. The following pieces of information are used in later steps:

i. Host NQN: Unique identifier of the EC2 instance for attachment;

ii. Target IP: Address of the connected block volume host;

iii. Target Port Number: Port number of the connected block volume host.

You can learn more about launching and configuring external storage arrays in the Outposts family documentation or in the respective storage array vendor documentation.
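If you chose to generate a host NQN in Step 1.b, any Linux machine with the nvme-cli package can produce one in the standard format. The commands below are a quick sketch; the generated value is what you add to the subsystem allow list and later enter in the launch wizard.

# Generate a new host NQN in the standard nqn.2014-08.org.nvmexpress:uuid:<uuid> format
sudo nvme gen-hostnqn

# On a host where nvme-cli is already configured, the existing host NQN is stored here
cat /etc/nvme/hostnqn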

2. In the Console, navigate to the EC2 Launch Instance Wizard by choosing EC2, Instances, Launch instances.

a. Name the instance and add any desired tags to be applied at launch.

b. Choose the desired, compatible RHEL 9-based Amazon Machine Image (AMI) from the list, or choose one from the AWS Marketplace.

c. Choose the desired EC2 Instance type.

d. Expand the Network settings section and select Edit. Choose the VPC and subnet of the target Outpost.

i. Outposts servers only: You must create an LNI in the Advanced Network settings before launching the instance.

e. Expand Advanced network configuration and select Add network device. Continue to add network devices until the Device index is equal to the volume index.

Figure 4. Advanced network configuration

f. Expand Configure storage, select Edit next to the External storage volumes section, and choose NVMe/TCP as the Storage network protocol.

Figure 5. External storage volumes configuration

g. Enter the HostNQN in the format provided for the NVMe/TCP data volume. Make sure that the HostNQN used has been added to the storage array subsystem allow list.

h. Select Add NVMe/TCP Discovery Controller and enter the IP address and port of the controller from the storage array. Enter 4420 as the Target Port, if the target port is unknown.

i. (Optional) You can add more data volumes that use a different target discovery controller at this time by choosing the Add NVMe/TCP Data Volume button under the Target IP address. Repeat Step 2.h for each data volume to be attached to the EC2 instance.

j. Expand the Advanced details and provide any additional Amazon EC2 behavior settings as appropriate.

k. At the bottom of the Advanced details section is the automatically generated User data. If you need to manually edit this data, you can do so by selecting Edit at the bottom.

Figure 6. Automatically generated user data file
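The wizard builds this user data from the values you entered, so you normally don't need to write it yourself. Purely as a hedged illustration of the kind of work such a script performs, a hand-written equivalent for an NVMe/TCP volume on a RHEL 9 instance might resemble the following; the host NQN, IP address, and port are placeholders rather than the wizard's actual output.

#!/bin/bash
# Install the NVMe tooling and load the NVMe/TCP transport module
dnf install -y nvme-cli
modprobe nvme-tcp

# Register the host NQN that was added to the storage array allow list (placeholder value)
echo "nqn.2014-08.org.nvmexpress:uuid:11111111-2222-3333-4444-555555555555" > /etc/nvme/hostnqn

# Discover the subsystems behind the discovery controller and connect to them (placeholder IP)
nvme discover -t tcp -a 192.168.1.10 -s 4420
nvme connect-all -t tcp -a 192.168.1.10 -s 4420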

l. When the configurations are set, choose the Launch instance button in the right-side column.

3. The EC2 Launch Instance Wizard now launches an EC2 instance configured as described on the Outpost and attaches the desired external data volume(s) to the EC2 instance.

4. Applications and users can access the data on the attached external volumes from within the EC2 instance. To verify this:

a. From within the launched EC2 instance, run sudo nvme list

b. The volumes are displayed as NVMe block devices such as /dev/nvme1n1, with the node number increasing for each attached volume. Local instance store volumes on Outposts servers and EBS boot volumes on Outposts racks are listed first, and external volumes follow with sequentially increasing node numbers.
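For example, once the external namespace is visible, you can create a filesystem on it and mount it from within the instance. The device node, filesystem, and mount point below are illustrative.

# Identify the external volume, then format and mount it
sudo nvme list
lsblk
sudo mkfs.xfs /dev/nvme2n1
sudo mkdir -p /mnt/external-data
sudo mount /dev/nvme2n1 /mnt/external-data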

5. External storage volume and array management, configuration, and backups continue to be managed through the storage vendor-provided toolkit. You can find more information on external storage management in the respective storage array vendor documentation.

Deployment steps for iSCSI connections

1. (Prerequisite) If there is no block data volume already running and configured on the compatible storage array, this must be completed in the storage solution’s interface before moving to Step 2.

a. Create an Initiator group (igroup) and add the Initiator IQN to the igroup. Then map the logical unit number (LUN) to the igroup.

b. Optionally, generate an initiator IQN to be used for the EC2 instance connection (a quick way to find or generate one is sketched at the end of this step), and add it to the appropriate igroups.

c. The following pieces of information are used in later steps:

i. Initiator IQN: Unique identifier of the EC2 instance for attachment;

ii. Target IQNs: Unique identifier of the storage virtual machine (SVM);

iii. Target IP: Address of the connected block volume host;

iv. (Optional) Target Port Number: Port number of the connected block volume host.

You can learn more about launching and configuring external storage arrays in the Outposts family documentation or in the respective storage array vendor documentation.
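If you need the initiator IQN for Step 1.b, a RHEL 9 host with the iscsi-initiator-utils package already has one in a well-known file, or you can compose your own in the standard iqn.yyyy-mm.<reversed-domain>:<identifier> format. A quick sketch:

# Show the initiator IQN that Open-iSCSI generated for this host
sudo cat /etc/iscsi/initiatorname.iscsi
# Example output: InitiatorName=iqn.1994-05.com.redhat:0123456789ab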

2. In the Console, navigate to the EC2 Launch Instance Wizard by choosing EC2, Instances, Launch instances.

a. Name the instance and add any desired tags to be applied at launch.

b. Choose the desired, compatible RHEL 9-based or Windows Server 2022-based AMI from the list, or purchase one from the AWS Marketplace.

c. Choose the desired EC2 Instance type.

d. Expand the Network settings section and choose the VPC and subnet of the target Outpost.

i. Outposts servers only: You must create an LNI in the Advanced Network settings before launching the instance.

e. Expand Advanced network configuration and select Add network device. Continue to add network devices until the Device index is equal to the volume index.

Figure 7. Advanced network configuration

f. Expand Configure storage, select Edit next to the External storage volumes section, and choose iSCSI as the Storage network protocol.

Figure 8. External storage volumes configuration

g. Enter the Initiator IQN for the iSCSI data volume in the format provided. Make sure that the Initiator IQN used has been added to the allow list for the volume.

h. Select Add iSCSI Target and enter the Target IP, Target Port, and Target IQN of the storage array. Enter 3260, the standard iSCSI port, for the Target Port if the target port is unknown.

i. (Optional) You can add additional data volumes with a different Target IQN at this time by selecting the Add iSCSI Target button under the Target IP address. Repeat Step 2.h for each data volume to be attached to the EC2 instance.

j. Expand the Advanced details and provide any additional Amazon EC2 behavior settings as appropriate.

k. At the bottom of the Advanced details section is the automatically generated User data. If you need to manually edit this data, you can do so by selecting Edit at the bottom.

Figure 9. Automatically generated user data file
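The wizard builds this user data from the values you entered, so you normally don't need to write it yourself. Purely as a hedged illustration of the kind of work such a script performs, a hand-written equivalent for an iSCSI volume on a RHEL 9 instance might resemble the following; the initiator IQN, target IQN, IP address, and port are placeholders rather than the wizard's actual output.

#!/bin/bash
# Install the Open-iSCSI initiator tooling
dnf install -y iscsi-initiator-utils

# Register the initiator IQN that was added to the igroup (placeholder value)
echo "InitiatorName=iqn.1994-05.com.redhat:example-initiator" > /etc/iscsi/initiatorname.iscsi
systemctl enable --now iscsid

# Discover the targets behind the portal and log in (placeholder IP, port, and target IQN)
iscsiadm -m discovery -t sendtargets -p 192.168.1.20:3260
iscsiadm -m node -T iqn.2010-01.com.example:storage.target01 -p 192.168.1.20:3260 --login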

l. When the configurations are set, choose the Launch instance button in the right-side column.

3. The EC2 Launch Instance Wizard now launches an EC2 instance configured as described on the Outpost and attaches the desired external data volume(s) to the EC2 instance.

4. Applications and users can access the data on the attached external volumes from within the EC2 instance. To verify this:

a. From within the launched EC2 instance, run iscsiadm -m session -P3

b. The volumes are displayed as SCSI block devices such as /dev/sda and /dev/sdb, with the device letter incrementing for each attached volume.
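For example, once the iSCSI session is established, you can identify the new disk, create a filesystem, and mount it; the device name and mount point below are illustrative.

# Confirm the session and find the new SCSI disk, then format and mount it
sudo iscsiadm -m session -P 3
lsblk
sudo mkfs.xfs /dev/sdb
sudo mkdir -p /mnt/external-data
sudo mount /dev/sdb /mnt/external-data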

5. External storage volume and array management, configuration, and backups continue to be managed through the storage vendor-provided toolkit. You can find more information on external storage management in the respective storage array vendor documentation.

Conclusion

This integration offers a streamlined workflow to attach and use external block data volumes on Outposts directly through the AWS Management Console, eliminating manual processes. It provides the full benefits of advanced data infrastructure from trusted storage providers in conjunction with the security, reliability, and scalability of AWS managed infrastructure. This helps you accelerate cloud migration for workloads that depend on third-party storage and realize the full potential of your on-premises data.

To learn more about this integration, visit the NetApp on-premises enterprise storage arrays for AWS Outposts solution page and the Pure Storage FlashArray for AWS Outposts blog post. To discuss your external storage needs with an Outposts expert, submit this form. If you are attending AWS re:Invent 2024, make sure to check out the NetApp booth (booth #1748) and Pure Storage booth (booth #454) to connect with our partner specialists.