AWS for M&E Blog

Virtual prototyping with Autodesk VRED on AWS

Figure 1: Autodesk VRED provides high-end rendering and streaming of complex digital assets

The VRED set of software tools from Autodesk lets designers create and present high-quality product renderings of complex digital assets, such as automotive vehicles and other engineering artifacts. VRED has traditionally relied on an array of hardware to increase productivity, including multiple GPU and CPU configurations both locally and remotely, which in turn requires low-latency, high-bandwidth networks and file systems. Meeting these needs has typically required a substantial capital investment in hardware, leading to renewal cycles that are much longer than the pace of advances in GPU and CPU architectures.

Leveraging a wide range of Amazon Elastic Compute Cloud (Amazon EC2) instance types and related services from Amazon Web Services (AWS), studios can now create workstations and cluster render nodes, with performant underlying networking, for a range of design and visualization workloads. This allows studios to capitalize on additional benefits:

  1. Costly up-front capital investment in hardware can be replaced with an OpEx model, using the latest hardware at a moment’s notice, on demand, from AWS.
  2. Infrastructure such as workstations and access to compute clusters for rendering can dynamically scale in line with design team sizes, ensuring teams have the resources to meet their deadlines.
  3. Teams can adopt remote workflows with relative ease.

Underlying technology

The fifth-generation graphics instance, Amazon EC2 G5, boasts an impressive arsenal of hardware features for GPU shading-unit-based parallel processing, NVIDIA OptiX workloads, and machine learning – all of which are used by VRED to create fast, accurate, ray-traced imagery. Each G5 GPU (an NVIDIA A10G) provides:

CUDA Cores / Shading Units   10,240
Ray Tracing Cores            80
Tensor Cores                 320
Memory                       24 GiB

The G5 instance comes in a variety of GPU, CPU, and memory sizes. These can be used to provide single-, quad-, or eight-GPU workstations running VRED Pro, or a larger pool of machines hosting VRED Core, allowing designers to increase their render and streaming capabilities via clustering. With Amazon EC2, instances can be stopped, reshaped into different sizes, and resumed, allowing the underlying instance to be customized to the task immediately at hand (a scripted sketch of this workflow follows the table below).

Instance Size   GPU Count   vCPUs   Memory (GiB)   Network Bandwidth (Gbps)
g5.4xlarge      1           16      64             Up to 25
g5.8xlarge      1           32      128            25
g5.12xlarge     4           48      192            40
g5.24xlarge     4           96      384            50
g5.48xlarge     8           192     768            100
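
As a minimal sketch of the stop/reshape/resume workflow using the AWS SDK for Python (boto3), the instance ID and target size below are placeholders:

import boto3

# Hypothetical workstation instance ID; replace with your own.
INSTANCE_ID = 'i-0123456789abcdef0'

ec2 = boto3.client('ec2', region_name='us-east-1')

# Stop the workstation and wait until it is fully stopped.
ec2.stop_instances(InstanceIds=[INSTANCE_ID])
ec2.get_waiter('instance_stopped').wait(InstanceIds=[INSTANCE_ID])

# Reshape the stopped instance, for example from a single-GPU
# g5.4xlarge to a quad-GPU g5.12xlarge for a heavier render session.
ec2.modify_instance_attribute(
    InstanceId=INSTANCE_ID,
    InstanceType={'Value': 'g5.12xlarge'},
)

# Resume the workstation on the new instance size.
ec2.start_instances(InstanceIds=[INSTANCE_ID])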

As an alternative, CPU-based cluster fleets can run VRED Render on an array of cost-effective Amazon EC2 compute instance classes and sizes, such as the M6a, which uses 3rd-generation AMD EPYC processors with up to 192 vCPUs.

Architecture

For single-node installations, Amazon EC2 G5 instances can run VRED Pro on Windows Server to function as an artist workstation delivering high-quality graphics and rendering. Performant remote display protocols such as NICE DCV from AWS or HP Anyware (formerly Teradici PCoIP) can securely stream applications to any device over varying network conditions. With these remote display protocols, customers can access graphics-intensive applications such as VRED Pro remotely from simple client machines, such as standard laptops or small form factor thin clients like the Intel NUC, eliminating the need for expensive dedicated workstations. This also offers the ability to work wherever a suitable network permits (we recommend a 20 Mb/s internet connection for dual 4K monitors).

To quickly load and transport VRED scene files and supporting data, performant file systems such as Amazon FSx for Windows File Server with fast underlying SSD storage can be mounted on workstation instances. This allows artists to easily share projects and render offline for seamless collaboration. To achieve a rich design experience, using input devices such as Wacom tablets in a lag-free manner, we recommend a latency of 25 ms or less between the artist workstation instance and the end user client; with AWS you can accomplish this by creating your instances in the global Region or Local Zone closest to your designers.
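
If you script your environment, a small boto3 sketch can retrieve the DNS name used to map the FSx file system from a workstation (this assumes configured credentials; 'share' is the default FSx for Windows share name):

import boto3

fsx = boto3.client('fsx')

# Print each file system's ID and the DNS name a workstation would map,
# for example: net use Z: \\<DNSName>\share
for fs in fsx.describe_file_systems()['FileSystems']:
    print(fs['FileSystemId'], fs.get('DNSName'))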

The following diagram depicts a simple VRED Pro workstation setup on AWS:

Figure 2: Architectural diagram showing multiple workstations running VRED Pro on AWS, with Amazon FSx for Windows File Server providing a file system for shared project data

Cluster workflows

Additional CPU- or GPU-based instances can be formed into clusters within VRED environments, allowing render tasks to be distributed away from a single machine. This can be used to accelerate the rendering of images, or to increase the performance of a real-time streaming session. A VRED cluster consists of a main node (for example, VRED Pro or VRED Core) and multiple cluster render nodes, connected over a low-latency network. Cluster nodes can be added elastically to augment a workstation as and when needed to bolster performance.

In the following diagram, the artist workstations (main nodes) are deployed on Windows G5 instances, as previously discussed, running VRED Pro on the Windows Server operating system. The VRED render node cluster is built from G5 instances running a Linux operating system with VRED Core installed, allowing both CPU and GPU rendering. The same render node cluster can be shared among multiple artists, provided it has enough resources to support the aggregate workload across the main instances. Alternatively, multiple cluster node fleets can be created to scale to requirements. For workflows that require the lowest latency and highest bandwidth, the cluster components can be placed within an AWS cluster placement group. This increased proximity enables higher per-flow throughput, for both scene and pixel data streaming, increasing frames per second.

Figure 3: Architectural diagram incorporating additional clusters of VRED Core on G5 instances, within a cluster placement group
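
To sketch how such a fleet might be provisioned programmatically with boto3 (the AMI ID is a placeholder for a Linux image with VRED Core installed; the group and tag names are illustrative):

import boto3

ec2 = boto2 = boto3.client('ec2')

# Create a cluster placement group so render nodes share low-latency,
# high-bandwidth network proximity.
ec2.create_placement_group(GroupName='vred-render-cluster',
                           Strategy='cluster')

# Launch a fleet of GPU render nodes into the placement group,
# tagged so they can be discovered later (see the script below).
ec2.run_instances(
    ImageId='ami-0123456789abcdef0',
    InstanceType='g5.12xlarge',
    MinCount=4,
    MaxCount=4,
    Placement={'GroupName': 'vred-render-cluster'},
    TagSpecifications=[{
        'ResourceType': 'instance',
        'Tags': [{'Key': 'Name', 'Value': 'VREDCluster'}],
    }],
)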

Configuring the cluster

For the render nodes to successfully connect and communicate with the main node, their IP addresses need to be added to the cluster settings within the VRED Pro UI. This is easily achieved programmatically with the following steps:

  • Add the AWS SDK for Python module (boto3) to the relevant VRED Pro install, so that VRED can interact with AWS services such as Amazon EC2. This can be done from a Windows Command Prompt run as Administrator:
cd "C:\Program Files\Autodesk\VREDPro-15.0\lib\python"
python.exe -m pip install boto3

Note: ensure that you add the boto3 module to the appropriate VRED install location.

We use the ‘requests’ HTTP library to query the Instance Metadata Service and automatically determine which Region the cluster machines are running in; install it from the same console:

python.exe -m pip install requests
  • Authenticate to your AWS account by following the Boto3 credentials configuration documentation.
  • From within the Script Editor inside VRED Pro, use this simple script to query the running Amazon EC2 instances serving as cluster machines, and automatically add their private IP addresses to the VRED cluster.
import boto3
import requests

# Determine the Region by querying the instance's metadata service.
# IMDSv2 requires a session token, so fetch one first.
token = requests.put(
    'http://169.254.169.254/latest/api/token',
    headers={'X-aws-ec2-metadata-token-ttl-seconds': '21600'},
).text
response = requests.get(
    'http://169.254.169.254/latest/meta-data/placement/region',
    headers={'X-aws-ec2-metadata-token': token},
)
region = response.text

# Define an EC2 interface to our chosen Region
ec2 = boto3.resource('ec2', region_name=region)

# Define filters for finding instances.
# In this case we are looking for running instances
# that follow our naming convention (Name tag = VREDCluster).
filters = [{'Name': 'tag:Name', 'Values': ['VREDCluster']},
           {'Name': 'instance-state-name', 'Values': ['running']}]

# Query the EC2 API for instances matching our filters
instances = ec2.instances.filter(Filters=filters)

# Extract the private IP addresses of those instances
private_ips = [instance.private_ip_address for instance in instances]

# Register the list of cluster machines in VRED
setClusterNodes(' '.join(private_ips))

Components within AWS can be tagged to group and identify them for purposes such as billing. In this example, we used the Name tag (with the value VREDCluster) to distinguish cluster machines from other Amazon EC2 instances.
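
For instances launched without the tag, a short boto3 snippet can apply it after the fact (the instance IDs below are placeholders):

import boto3

ec2 = boto3.client('ec2')

# Tag existing render nodes so the discovery script above can find them.
ec2.create_tags(
    Resources=['i-0123456789abcdef1', 'i-0123456789abcdef2'],
    Tags=[{'Key': 'Name', 'Value': 'VREDCluster'}],
)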

The previous process can be extended with additional VRED API code to integrate with the VRED main window, enabling a simpler user experience.
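
As an untested sketch of such an integration (this assumes the cluster-discovery code above has been wrapped in a function named discover_and_set_cluster_nodes, and that VRED exposes its Qt main window handle to scripts as VREDMainWindowId, following the PySide2 pattern shown in Autodesk's scripting examples):

from PySide2 import QtWidgets
from shiboken2 import wrapInstance

def add_cluster_menu():
    # Wrap VRED's main window handle in a QMainWindow instance.
    main_window = wrapInstance(VREDMainWindowId, QtWidgets.QMainWindow)

    # Add a custom menu entry that runs the discovery script above.
    menu = main_window.menuBar().addMenu('AWS Cluster')
    action = menu.addAction('Discover render nodes')
    action.triggered.connect(discover_and_set_cluster_nodes)

add_cluster_menu()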

Streaming

Providing seamless interaction with both the display and input devices of cloud-based workstations is critical to the overall experience. This needs to include support for multiple 4K monitors, 4:4:4 color, and pressure-sensitive Wacom tablet input. A number of solutions meet these requirements, including NICE DCV, which is licensed on all Amazon EC2 instances at no additional cost. With NICE DCV, artists around the globe can access flexible, secure, high-performing, and cost-effective virtual workstations on AWS, removing technological and geographic barriers.

The VRED Stream App is a web interface for VRED that delivers real-time streaming of design content, with presentation controls for switching viewpoints and variants. Using VRED Stream, artists can run VRED on an Amazon EC2 instance and stream the session to browsers for presentation and review purposes. To enable VRED Stream, follow the instructions on the Autodesk VRED Stream page. Once streaming is enabled, an artist can provide the public IP address of the main node Amazon EC2 instance for reviewers to access the stream.
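
For example, the main node's public IP can be read from the Instance Metadata Service, using the same IMDSv2 token exchange as the cluster script (the port reviewers connect on depends on how your VRED web interface is configured):

import requests

# IMDSv2: fetch a session token, then request the public IPv4 address.
token = requests.put(
    'http://169.254.169.254/latest/api/token',
    headers={'X-aws-ec2-metadata-token-ttl-seconds': '21600'},
).text
public_ip = requests.get(
    'http://169.254.169.254/latest/meta-data/public-ipv4',
    headers={'X-aws-ec2-metadata-token': token},
).text

print('Reviewers can connect to:', public_ip)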

To increase the performance of streaming sessions, the EC2 instances can use the AWS Global Accelerator network service, allowing streaming sessions to intelligently use the AWS global network to accelerate performance, rather than relying on the public, congestion-prone internet to route sessions.
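
A hedged boto3 sketch of the accelerator setup follows (the port and instance ID are placeholders; Global Accelerator's control-plane API is served from the us-west-2 Region):

import boto3

ga = boto3.client('globalaccelerator', region_name='us-west-2')

# Create an accelerator with static anycast IP addresses.
accelerator = ga.create_accelerator(Name='vred-streaming', Enabled=True)
acc_arn = accelerator['Accelerator']['AcceleratorArn']

# Listen for streaming traffic on the port your session uses.
listener = ga.create_listener(
    AcceleratorArn=acc_arn,
    Protocol='TCP',
    PortRanges=[{'FromPort': 8888, 'ToPort': 8888}],
)

# Route the listener to the main node instance in its home Region.
ga.create_endpoint_group(
    ListenerArn=listener['Listener']['ListenerArn'],
    EndpointGroupRegion='us-east-1',
    EndpointConfigurations=[{'EndpointId': 'i-0123456789abcdef0'}],
)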

Performance optimization

AWS documents the setup and optimization of drivers for GPU-based instances, for both Windows and Linux; we highly recommend following this documentation to ensure the correct NVIDIA drivers are used for optimal performance.
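
For reference, the Windows driver packages are distributed via an AWS-managed S3 bucket described in that documentation; a short boto3 sketch (requiring IAM permissions to read the bucket) lists the available packages:

import boto3

s3 = boto3.client('s3')

# List GRID driver packages in the AWS-managed bucket referenced by
# the "Install NVIDIA drivers" documentation.
response = s3.list_objects_v2(Bucket='ec2-windows-nvidia-drivers',
                              Prefix='latest/')
for obj in response.get('Contents', []):
    print(obj['Key'])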

By default, Amazon EC2 G5 instance types are provisioned with NVIDIA Error Correction Code (ECC) enabled – a mechanism that detects and corrects memory errors on the GPU. Autodesk's on-balance recommendation for VRED workflows is to disable this feature to remove the memory-checking overhead and gain a significant performance improvement (the feature can be re-enabled in the event of any debugging needs).

ECC is re-activated on every fresh start of an EC2 instance, and disabling it requires a reboot. Turning it off can be automated via a traditional startup script, or a User Data script that runs at launch. For a Windows instance, a typical User Data script would be:

<powershell>
Write-Host 'ECC Checking'
# Query the ECC state; if any GPU reports ECC as enabled, disable it and reboot.
if ( &'C:\Program Files\NVIDIA Corporation\NVSMI\nvidia-smi.exe' -q -d ECC | Select-String "Enabled")
{
    # Disable ECC on all GPUs; the change only takes effect after a restart.
    &'C:\Program Files\NVIDIA Corporation\NVSMI\nvidia-smi.exe' -e 0
    Write-Host 'ECC Enabled: Turning off ECC'
    Restart-Computer
} else {
    Write-Host 'ECC Disabled: Nothing to do'
}
</powershell>
<persist>true</persist>

NOTE: A Linux-based version for cluster and render machines could follow a similar methodology, as sketched below.
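
A minimal Python sketch of that methodology (run as root at boot, for example from Linux user data or a systemd unit; the nvidia-smi query field is per NVIDIA's documentation):

#!/usr/bin/env python3
import subprocess

# Query the current ECC mode of every GPU on the node.
state = subprocess.run(
    ['nvidia-smi', '--query-gpu=ecc.mode.current', '--format=csv,noheader'],
    capture_output=True, text=True, check=True,
).stdout

if 'Enabled' in state:
    # Disable ECC on all GPUs; the change takes effect after a reboot.
    subprocess.run(['nvidia-smi', '-e', '0'], check=True)
    subprocess.run(['reboot'], check=True)
else:
    print('ECC disabled: nothing to do')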

Summary

Our vision is to enable artists and digital designers to operate at scale and provide a cost-optimized solution in partnership with Autodesk. With Autodesk VRED on AWS, artists can now quickly create and present product renderings, design reviews, and virtual prototypes using pay-as-you-go infrastructure.

We hope that you have learned how VRED Pro and VRED Core can be configured and integrated on AWS. We also discussed the benefits of deploying VRED in the cloud, both in terms of expenditure models and in terms of dynamically scaling and provisioning architecture on demand, as well as performance optimization techniques.

We look forward to working with our customers, and as always, if you have any questions or feedback, please reach out to your account team.

Further reading

You may also find our recent NVIDIA CloudXR quick-start guide, covering the deployment of extended reality experiences on AWS, of interest:

https://github.com/aws-quickstart/quickstart-nvidia-cloudxr

The AWS Thinkbox Deadline render farm manager can also be used to integrate VRED rendering and clustering services into a unified scheduling environment with other products such as Autodesk Maya. You can use this application completely on-premises, in hybrid workflows that burst into the cloud when needed, or to orchestrate your farm entirely in the cloud.

https://www.awsthinkbox.com/deadline

Looking at GPU utilization for not only your cluster machines, but your workstations too, can be helpful in determining the efficiency of scaling GPU numbers in relation to parallel compute processes within VRED. This blog details how to set up and capture useful logs of GPU data for analysis.

https://aws.amazon.com/blogs/machine-learning/monitoring-gpu-utilization-with-amazon-cloudwatch/

About Autodesk

As a world leader in 3D design, engineering, and entertainment software, Autodesk delivers the broadest product portfolio, helping over 10 million customers, including 99 of the Fortune 100, to continually innovate through the digital design, visualization, and simulation of real-world project performance.

Bhavisha Dawada

Bhavisha Dawada is a Senior Solutions Architect at AWS.

Andy Hayes

Andy is a Senior Solutions Architect, Visual Computing at AWS.

David Randle

David Randle is Head of Spatial Computing GTM, Visual Computing at AWS.

DJ Rahming

DJ is a Senior Solutions Architect, Visual Computing at AWS.

Mike Owen

Mike is a Principal Solutions Architect, Visual Computing at AWS.