AWS for Industries

Accelerating EDA with the Agility of AWS and NetApp Data Services

Introduction

Semiconductor design simulation, verification, lithography, metrology, yield analysis, and many other workloads benefit from the scalability and performance of the AWS Cloud. For example, the latest generation EC2 instance types enhance compute performance for these applications. Amazon FSx for NetApp ONTAP is a native AWS managed service delivering block and file storage integrated with NetApp’s portfolio of enterprise-grade data services. Designers get full access to ONTAP data capabilities: proven enterprise-grade features, management, performance, data protection, storage efficiencies, and the NetApp cloud portfolio. You can also integrate FSx for NetApp ONTAP with existing ONTAP systems. This post walks you through the configuration of this EDA solution and provides performance benchmarks.

FSx for NetApp ONTAP is a fully managed service: AWS supports it, and the service is built with the latest AWS compute, disk, and networking technology to provide maximum performance. It is tightly integrated with other AWS services and is easily managed through the AWS Management Console, CLI, or SDK.

You can configure FSx for NetApp ONTAP to act as both high-performance storage for EDA workloads in the cloud and a cache for tools and libraries on premises. To an EDA workload in the cloud, tools and libraries appear to be local. Not only is there no need to mirror all the tools and libraries to the cloud, but there is no need to actively manage a separate collection of versioned tools and libraries in the cloud. Get just the data you need, where and when you need it.

By creating a “reverse cache” in region 1, you can make the results of jobs run in region 2 appear local to engineers working in region 1. In this way, customers can launch batch jobs in the cloud and debug them elsewhere without modifying their tools and processes.

Prerequisites

·      Two VPCs with VPC peering and non-overlapping CIDR ranges (don’t use the default VPCs)

·      Routing configured between the two VPCs (a minimal AWS CLI sketch of the peering and routing setup follows this list)

·      A Linux-based EC2 compute instance in each VPC
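If you are scripting these prerequisites, the following is a minimal AWS CLI sketch of the cross-region peering and routing setup. The VPC, route table, and peering connection IDs, regions, and CIDR range are placeholders for illustration; substitute your own values, and add the corresponding return route in the second VPC.

# Request a peering connection from the primary VPC to the secondary VPC (IDs and region are placeholders)
aws ec2 create-vpc-peering-connection --vpc-id vpc-PRIMARY --peer-vpc-id vpc-SECONDARY --peer-region us-west-2

# Accept the request from the secondary (accepter) region
aws ec2 accept-vpc-peering-connection --vpc-peering-connection-id pcx-EXAMPLE --region us-west-2

# In the primary VPC's route table, send traffic for the secondary VPC's CIDR over the peering connection
aws ec2 create-route --route-table-id rtb-PRIMARY --destination-cidr-block 10.1.0.0/16 --vpc-peering-connection-id pcx-EXAMPLE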

Create FSx for NetApp ONTAP filesystems for origin and destination

We need to create an FSxN filesystem in each region. These filesystems will hold the origin and FlexCache destination volumes in their respective VPCs.

Create FSxN origin filesystem in the primary VPC

Create FSxN origin filesystem in primary VPC (console)

1.     Sign in to the AWS Management Console and open the Amazon FSx console at https://console.aws.amazon.com/fsx/

2.     Choose Create file system and select Amazon FSx for NetApp ONTAP for the file system type.

3.     Select Standard create as the Creation method.

a.     Give the filesystem a name in File system name

b.     Enter the amount of SSD storage capacity to provision in GB.  We used 4096 for this example.

c.     (Optional) Provision SSD IOPS or Throughput capacity. The default automatic provisioning is 3 IOPS per GiB of SSD storage and 512 MB/s of throughput.

4.    Fill in the Network and security section of the form

a.     Select the Virtual Private Cloud (VPC) you want the FSx for NetApp ONTAP filesystem to be deployed into

b.     Specify the correct VPC security groups to allow communications.

c.     Pick the Preferred subnet and Standby subnet to deploy the FSx for NetApp ONTAP network interfaces into.  The subnets need to be in different availability zones.

d.     VPC Route tables need to be updated to support FSxN.  If you have multiple route tables for different subnets in your VPC, be sure to select the VPC route tables that will forward traffic to your filesystem.

5.     Complete the Security & Encryption section of the create filesystem form.

a.     By default, the Encryption key for FSx for NetApp ONTAP uses the aws/fsx AWS managed KMS key to encrypt data at rest. If you want to use a customer managed key, select it here.

b.     A password for the fsxadmin account is needed, as we’re going to log in to the ONTAP CLI to create the FlexGroup and FlexCache volumes.

c.     Set a storage virtual machine (SVM) name. The SVM name should be unique, at least within the region, so that you can distinguish it from other SVMs you create.

d.     (Optional) Set a password for the vsadmin account; the password can also be changed later from the ONTAP CLI. We don’t need to join a domain for this exercise.

6.     (Optional) Enter the volume name, junction path, and volume size. We’ll be creating a FlexGroup volume from the ONTAP CLI in a few steps. We recommend enabling storage efficiency.

7.     Choose Next.

8.     You will see a summary page showing what is being created. Review all the choices, and then choose Create file system. (An equivalent AWS CLI sketch follows these steps.)
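If you prefer to script this step instead of using the console, a roughly equivalent AWS CLI call is sketched below. The subnet IDs, security group ID, and password are placeholders, and the values mirror the example above (4096 GB of SSD storage and 512 MB/s of throughput); adjust them to match your environment.

# Create a Multi-AZ FSx for NetApp ONTAP filesystem (all IDs and the password are placeholders)
aws fsx create-file-system \
    --file-system-type ONTAP \
    --storage-capacity 4096 \
    --subnet-ids subnet-PREFERRED subnet-STANDBY \
    --security-group-ids sg-EXAMPLE \
    --ontap-configuration "DeploymentType=MULTI_AZ_1,PreferredSubnetId=subnet-PREFERRED,ThroughputCapacity=512,FsxAdminPassword=EXAMPLE-PASSWORD" \
    --tags Key=Name,Value=eda-origin-fs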

Gather Inter-cluster endpoint IP addresses (console)

After the filesystem is created, open its details page in the FSx console. Be sure to note down the Inter-cluster endpoint IP addresses; you’ll need these to create the peering connection later. They can be found in the Endpoints section of the FSxN filesystem details page.

1.     Sign in to the AWS Management Console and open the Amazon FSx console at https://console.aws.amazon.com/fsx/

2.     In the navigation pane, choose File systems.

3.     Select the filesystem we created in the previous step and load the details page.

4.     Take note of the Management endpoint – IP address. We’ll use this address to SSH into the ONTAP CLI later.

5.     Store the Inter-cluster endpoint – IP addresses somewhere; we need them later when we set up the peering connection between the two FSxN filesystems. (An AWS CLI sketch for retrieving these endpoints follows this list.)
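If you prefer to retrieve these addresses programmatically, the same information is exposed by the FSx API. A minimal sketch (the filesystem ID is a placeholder):

# Print the management and inter-cluster endpoints (DNS names and IP addresses) of the filesystem
aws fsx describe-file-systems --file-system-ids fs-EXAMPLE \
    --query "FileSystems[0].OntapConfiguration.Endpoints"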

Repeat these steps in the second VPC to create the destination filesystem.

Configure cluster peering connection between FSxN filesystems

Before we can create a FlexCache, we need to set up cluster peering between the FSxN filesystems. We’ll create the cluster peering connection on the origin filesystem and accept it on the peer filesystem.

Create cluster peering connection on origin filesystem (ONTAP CLI)

1.     SSH to the origin FSxN filesystem

2.     Create the peering connection by entering the following command into the ONTAP CLI. For the peer addresses, use the Inter-cluster endpoint IP addresses of the FlexCache destination filesystem.

FsxId0789eb273224fce2e::> cluster peer create \
                            -peer-addrs [PEER_ADDRESSES_COMMA_SEPARATED] \
                            -generate-passphrase \
                            -applications snapmirror,flexcache  

Notice: 
        Passphrase: XXXXXXXXXXXXXXXXXXXXXXXX 
        Expiration Time: 10/28/2021 04:31:34 +00:00 
        Initial Allowed Vserver Peers: - 
        Intercluster LIF IP: 10.25.1.217 
        Peer Cluster Name: FsxId0b747c559ee19315d 
        
        Warning: make a note of the passphrase - it cannot be displayed again.

3.     Store the passphrase from the output of the prior command, because we’ll need to enter it when creating the peering connection on the destination FSxN filesystem.

Create cluster peering connection on destination filesystem (ONTAP CLI)

1.     SSH to the destination FSxN filesystem

2.     Create the peering connection by entering the following command into the ONTAP CLI.

a.     For the peer addresses, use the Inter-cluster endpoint IP addresses of the FlexCache origin filesystem (comma separated).

b.     Enter the passphrase that was returned from cluster peer create on the origin filesystem.

FsxId0b747c559ee19315d::> cluster peer create \
                          -peer-addrs [PEER_ADDRESSES_COMMA_SEPARATED] \
                          -applications snapmirror,flexcache 

Notice: Use a generated passphrase or choose a passphrase of 8 or more characters. To ensure the authenticity of the peering relationship, use a phrase or sequence of characters that would be hard to guess.

Enter the passphrase: 
Confirm the passphrase: 

Verify cluster peering connection (ONTAP CLI)

1.     SSH to the origin FSxN filesystem

2.     Run the ‘cluster peer show’ command and make sure there are no errors.  Availability should show ‘Available’ and Authentication should show ‘ok’.

3.     Repeat for the destination FSxN filesystem

FsxId0b747c559ee19315d::> cluster peer show
Peer Cluster Name         Cluster Serial Number Availability   Authentication
------------------------- --------------------- -------------- --------------
FsxId0789eb273224fce2e    1-80-000011           Available      ok

Create vserver peering connection on origin FSxN filesystem (ONTAP CLI)

1.     SSH to the origin FSxN filesystem

2.     Create the vserver peering connection by running the following command in the ONTAP CLI.

a.     For the peer vserver enter the SVM name from the destination FSxN filesystem

b.     For the peer cluster enter the cluster name of the destination FSxN filesystem (this is the Peer Cluster Name shown by the ‘cluster peer show’ command on the origin).

FsxId0789eb273224fce2e::> vserver peer create -vserver [ORIGIN_SVM_NAME] -peer-vserver [DESTINATION_SVM_NAME] -applications snapmirror,flexcache -peer-cluster [DESTINATION_CLUSTER_NAME]

Info: [Job 48] 'vserver peer create' job queued 

Next, accept the vserver peering connection on the cache (destination) filesystem.

Accept vserver peering connection on destination FSxN filesystem (ONTAP CLI)

1.     SSH to the destination FSxN filesystem

2.     Accept the vserver peering connection by running the following command in the ONTAP CLI.

a.     For the vserver option enter the SVM name of the destination FSxN filesystem.

b.     For the peer vserver enter the SVM name from the origin FSxN filesystem.

FsxId0b747c559ee19315d::> vserver peer accept -vserver [DESTINATION_SVM_NAME] -peer-vserver [ORIGIN_SVM_NAME]

Info: [Job 48] 'vserver peer accept' job queued 

Verify vserver peering connection (ONTAP CLI)

1.     SSH to the origin FSxN filesystem

2.     Run ‘vserver peer show’ and verify the Peer State is ‘peered’. Repeat on the destination FSxN filesystem.

FsxId0b747c559ee19315d::> vserver peer show
            Peer        Peer                           Peering        Remote
Vserver     Vserver     State        Peer Cluster      Applications   Vserver
----------- ----------- ------------ ----------------- -------------- ---------
usw2-fs01   use1-fs01   peered       FsxId0789eb273224fce2e 
                                                       snapmirror, flexcache 
                                                                      use1-fs01

Create origin FlexGroup volume (ONTAP CLI)

1.     SSH to the origin FSxN filesystem

2.     Run the following command to enable 64-bit file identifiers.

a.    Enter the name of the origin SVM for the vserver option.

FsxId0789eb273224fce2e::> set advanced; vserver nfs modify -vserver [ORIGIN_SVM_NAME] -v3-64bit-identifiers enabled

Warning: These advanced commands are potentially dangerous; use them only when directed to do so by NetApp personnel.
Do you want to continue? {y|n}: y


Warning: You are attempting to increase the number of bits used for NFSv3 FSIDs and File IDs from 32 to 64 on Vserver "use1-fs01". This could result in older client software no longer working with the volumes owned by Vserver "use1-fs01".
Do you want to continue? {y|n}: y

Warning: Based on the changes you are making to the NFS server on Vserver "use1-fs01", it is highly recommended that you remount all NFSv3 clients connected to it after the command completes.
Do you want to continue? {y|n}: y

3.     Create the FlexGroup volume with the following command.

a.     For the vserver option enter the name of the SVM you want to host the origin volume.

b.     The value for volume is the desired volume name.

c.     The aggr-list option should be set to ‘aggr1’.

d.     Set the aggr-list-multiplier to 8.

e.     Set the junction-path of the volume.

f.      Enter 1T as the value for the size option.

FsxId0789eb273224fce2e::> vol create -vserver [ORIGIN_SVM_NAME] -volume [ORIGIN_VOLUME_NAME] -aggr-list aggr1 -aggr-list-multiplier 8 -junction-path /[ORIGIN_VOLUME_NAME] -size 1T

Notice: The FlexGroup volume "fg01" will be created with the following number of constituents of size 128GB: 8.
Do you want to continue? {y|n}: y
[Job 49] Job succeeded: Successful     
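(Optional) To confirm the FlexGroup volume was created and mounted at the expected junction path, you can run a quick check from the same ONTAP CLI session. This is just a sanity-check sketch that reuses the placeholders above.

FsxId0789eb273224fce2e::> volume show -vserver [ORIGIN_SVM_NAME] -volume [ORIGIN_VOLUME_NAME] -fields state,size,junction-path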

Create destination FlexCache volume (ONTAP CLI)

1.     SSH to the destination FSxN filesystem

2.     Enter the following command into the ONTAP CLI to create the destination FlexCache volume.

a.     Enter the name of the destination SVM for the vserver option.

b.     For the volume option enter the destination volume name.

c.     The aggr-list option should be set to ‘aggr1’.

d.     Set the aggr-list-multiplier to 8.

e.     Enter 1T as the value for the size option.

f.      Use the origin FlexGroup volume name for the origin-volume option.

g.     Use the origin SVM name for the origin-vserver option.

h.     Set the junction-path of the volume.

FsxId0b747c559ee19315d::> vol flexcache create -vserver [DESTINATION_SVM_NAME] \
                          -volume [DESTINATION_VOLUME_NAME] -aggr-list aggr1 \
                          -aggr-list-multiplier 8 -origin-volume [ORIGIN_VOLUME_NAME] -size 1T \
                          -origin-vserver [ORIGIN_SVM_NAME] -junction-path /[DESTINATION_VOLUME_NAME]
[Job 60] Job succeeded: Successful.
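(Optional) You can confirm the cache relationship from either side before mounting. The following sanity-check sketch lists the cache and its origin on the destination filesystem, and the caches known to the origin filesystem.

FsxId0b747c559ee19315d::> volume flexcache show -vserver [DESTINATION_SVM_NAME]

FsxId0789eb273224fce2e::> volume flexcache origin show-caches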

Mount the volumes on the clients (Linux CLI)

1.     SSH to the Linux instance in the origin VPC.

2.     Use the following command to mount the filesystem. These mount options are tuned for EDA workloads.

a.     For <server>, enter the NFS (data) DNS name or IP address of the SVM hosting the volume.

b.     For <vol>, enter the volume name (its junction path).

Mounting the volume

 %> mount -t nfs -o "nocto,actimeo=600,hard,rsize=262144,wsize=262144,vers=3,tcp" <server>:/<vol> /vol

Repeat on the Linux instance in the destination VPC, using the FlexCache volume name to mount the cache.
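If you want the mount to persist across reboots, an equivalent /etc/fstab entry is sketched below; <server> and <vol> are the same placeholders as in the mount command above.

# /etc/fstab entry (sketch) using the same EDA-tuned NFS options, plus _netdev so the mount waits for networking
<server>:/<vol>  /vol  nfs  nocto,actimeo=600,hard,rsize=262144,wsize=262144,vers=3,tcp,_netdev  0  0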
 

Configuration Summary

As geometries shrink and designs get more complex, customers are looking for ways to rapidly scale infrastructure. The solution presented here provides a simple method to cache storage between locations, allowing workloads to shift seamlessly between regions while giving engineers a consistent environment.

Benchmarks

Using an industry-standard benchmark, we attempted to show the benefits of read caching. However, the results were “too good to be true” due to client-side caching and the limited working set of the benchmark tool.

We were, however, able to run benchmarks against the scratch storage and concluded that latency remains flat as performance scales with the addition of filesystems.

The complete workload is a mixture of concurrently running functional and physical phases and, as such, represents a typical flow from one set of EDA tools to another.

The functional phase consists of initial specifications and logical design. The physical phase takes place when converting the logical design into a physical chip. During the sign-off and tape-out phases, final checks are completed, and the design is delivered to a foundry for manufacturing. Each of these phases presents differently when it comes to storage.

The functional phases are metadata intensive (file stat and access calls), though they also include a mixture of sequential and random read and write I/O. Although metadata operations are effectively without size, the read and write operations range between less than 1K and 16K; the majority of reads are between 4K and 16K, and most writes are 4K or less. The physical phases, on the other hand, are composed entirely of sequential read and write operations with a mixture of 32K and 64K op sizes. Most of the throughput shown in the graphs above comes from the sequential physical phases of the workload, whereas the I/O comes from the small, random, metadata-intensive functional phases, both of which happen in parallel.

For more information, see NetApp’s Electronic Design Automation site: https://cloud.netapp.com/solutions/electronic-design-automation

Craig Chamberlin

Craig Chamberlin is a Solution Architect at AWS. He has a background in EDA infrastructure, focusing on compute and storage.

Chad Morgenstern

Chad Morgenstern is a North Carolina transplant and a Principal Technologist in NetApp’s Office of the CTO. A 15-year veteran at NetApp, Chad has focused his career on benchmarking, tuning, and telling the story of the performance journey to whomsoever will listen. Among other roles, Chad is the performance lead for Cloud Volumes Service for GCP. Chad is happily married, the father of five daughters, and the owner of one yellow parrot.

Jim Holl

Jim Holl is a Silicon Valley native and a Principal Engineer in NetApp’s Office of the CTO, where he strives to turn everything into a public cloud service.

Virgilio Inferido

Virgilio Inferido is a Technical Marketing Engineer at NetApp. He has extensive experience in cloud computing and storage performance benchmarking.