AWS for Games Blog
Enhancing Game and Media Workflows with Global File Systems on AWS
In today’s interconnected world, organizations are often distributed across multiple geographic locations. Many begin as a small local studio, where artists collaborate in a single physical location and rely on local storage such as network-attached storage (NAS). As the studio gains momentum and expands into a globally connected organization with teams in multiple locations, adopting a global file system (GFS) becomes important. Video production and game development in particular require efficient workflows that involve multiple departments and span different time zones. These teams often operate in a follow-the-sun model, where global teams seamlessly pick up and continue projects around the clock. Moving and synchronizing enormous media assets, including high-definition video and complex Unreal Engine data, quickly and reliably is essential.
A global file system is a cloud-based file system with a unified namespace that enables near real-time access to files from multiple geographic locations. The global namespace ensures that every file in the GFS can be uniquely identified and accessed through its metadata, regardless of where it is physically stored.
This article discusses why global file systems are indispensable and explores some use cases and potential architectures using AWS services and partner solutions.
In many cases, customers prefer a follow-the-sun model, where work is passed between teams in different time zones to keep projects moving 24/7. Other organizations use a hub-and-spoke model, with one main Region for writing data and multiple Regions for read-only access. In either case, an efficient global file system is key to managing workflows across industries.
For instance, game development studios use GFS solutions to manage game assets, build distribution, and engine data for collaborative work among game designers, developers, and artists. An artist working in a content production application needs access to all files on the GFS volume regardless of their location, and needs to work with files at resolutions up to 4K without any performance lag.

The media and entertainment (M&E) industry, particularly visual effects (VFX) and animation studios, also benefits greatly from a global file system. The advanced data management capabilities a GFS provides are essential for handling large files and complex data workflows, and such systems facilitate collaborative work across post-production houses, ensuring efficiency and seamless integration of efforts. Consider an editor who has flown from London to Los Angeles: they need to access and work with high-resolution assets on a local virtual desktop without lag.

Similarly, advertising agencies create commercials, digital advertisements, and marketing campaigns. They often require GFS solutions for managing multimedia assets, collaborating with clients, and sharing files globally across distributed teams and social media outlets.
Amazon Simple Storage Service (Amazon S3) Multi-Region Access Point + Mountpoint for Amazon S3
Ideal for Linux applications and for processing large-scale workloads from terabytes to petabytes. An Amazon S3 Multi-Region Access Point (MRAP) is a feature of Amazon S3 that gives you a single endpoint for accessing data that is replicated across multiple AWS Regions. Once the MRAP is created, you can use its global endpoint to access data from any AWS Region. When applications access an MRAP global endpoint, AWS Global Accelerator intelligently routes requests over the AWS global network to the nearest active Amazon S3 bucket.
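As a rough sketch of the setup, the following Python snippet uses boto3 to create an MRAP over two replicated buckets and then reads an object through the MRAP ARN. The account ID, bucket names, MRAP alias, and object key are placeholders, and requests against the global endpoint need SigV4A support (for example, installing boto3 with the AWS CRT).

```python
import boto3

# Control-plane calls for Multi-Region Access Points are made against us-west-2.
s3control = boto3.client("s3control", region_name="us-west-2")

# Create an MRAP spanning two replicated buckets (asynchronous operation).
response = s3control.create_multi_region_access_point(
    AccountId="111122223333",
    Details={
        "Name": "game-assets-mrap",
        "Regions": [
            {"Bucket": "game-assets-us-east-1"},
            {"Bucket": "game-assets-eu-west-1"},
        ],
    },
)
print("Track creation with:", response["RequestTokenARN"])

# Once the MRAP is ready, address it by its ARN; S3 routes the request to the
# nearest active bucket. The alias below is a placeholder.
s3 = boto3.client("s3")  # requires SigV4A, e.g. pip install "boto3[crt]"
obj = s3.get_object(
    Bucket="arn:aws:s3::111122223333:accesspoint/mfzwi23gnjvgw.mrap",
    Key="levels/level01/lighting.uasset",
)
print(obj["ContentLength"], "bytes")
```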
Mountpoint for Amazon S3 is a tool that allows you to mount an Amazon S3 bucket as a local file system. You can use standard file system operations to access your assets in an Amazon S3 bucket from any Linux-based application that works with a local file system.
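For example, a minimal sketch of mounting a bucket with Mountpoint and reading an asset through ordinary file I/O might look like the following. The bucket name, mount directory, and object path are illustrative, and the sketch assumes the mount-s3 binary is installed and the process has permission to create the mount directory.

```python
import subprocess
from pathlib import Path

MOUNT_DIR = Path("/mnt/game-assets")  # illustrative mount point
BUCKET = "game-assets-us-east-1"      # illustrative bucket name

MOUNT_DIR.mkdir(parents=True, exist_ok=True)

# Mountpoint for Amazon S3 exposes the bucket as a local directory.
subprocess.run(["mount-s3", BUCKET, str(MOUNT_DIR)], check=True)

# Standard file system operations now read S3 objects.
with open(MOUNT_DIR / "textures" / "character_diffuse.png", "rb") as f:
    header = f.read(16)
print(f"Read {len(header)} bytes through the mount point")
```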
Together, an MRAP and Mountpoint act as a GFS for Linux-based applications by providing a single endpoint for accessing data that is replicated across multiple AWS Regions. Additionally, you can use Amazon S3 Transfer Acceleration to optimize transfer speed from your local workstation to an Amazon S3 bucket. When you write data through an MRAP, S3 Cross-Region Replication keeps the data in sync across the buckets associated with the access point.
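As an illustration of the Transfer Acceleration piece, the sketch below enables acceleration on a source bucket and uploads a file through the accelerated endpoint using boto3; the bucket and file names are placeholders.

```python
import boto3
from botocore.config import Config

BUCKET = "game-assets-us-east-1"  # placeholder source bucket

# One-time setup: turn on Transfer Acceleration for the bucket.
boto3.client("s3").put_bucket_accelerate_configuration(
    Bucket=BUCKET,
    AccelerateConfiguration={"Status": "Enabled"},
)

# Upload from a local workstation through the accelerated endpoint.
s3_accel = boto3.client("s3", config=Config(s3={"use_accelerate_endpoint": True}))
s3_accel.upload_file("shot_042_v003.exr", BUCKET, "renders/shot_042_v003.exr")
```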
Mountpoint is designed for applications that don’t require features like file locking or POSIX permissions. The next architecture discussed in the article would be a good choice if you need file locking.
Amazon File Cache with Amazon S3
Amazon File Cache is a high-speed cache on AWS that’s used to process file data, regardless of where the data is stored: on-premises file systems, AWS file systems, or Amazon S3 buckets.
This option suits customers who want a hub-and-spoke model for data operations, where one main Region handles both reads and writes while other Regions are primarily read-only and the data stays synchronized everywhere. Amazon File Cache consolidates dispersed data from Amazon S3 buckets in multiple Regions and provides a unified view of files and directories. It delivers this data with sub-millisecond latency and high throughput to applications running on AWS. Because it is based on Amazon FSx for Lustre and is POSIX-compliant, you can use your current Linux-based applications without making any changes. Studios can link their S3 buckets to the Amazon File Cache service for rapid, synchronized access, making it ideal for media-rich game development workflows. When linked to a data repository, a cache transparently presents Amazon S3 or NFS objects as files and directories that can be accessed using the open-source Lustre client, and it synchronizes data between the cache and the Amazon S3 buckets.
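As a sketch of what linking looks like, the following boto3 call creates a cache with two S3 buckets attached as data repository associations; the subnet, security group, bucket names, cache paths, and sizing values are placeholders.

```python
import boto3

fsx = boto3.client("fsx", region_name="us-east-1")

# Create a cache and link two S3 buckets as data repository associations.
cache = fsx.create_file_cache(
    FileCacheType="LUSTRE",
    FileCacheTypeVersion="2.12",
    StorageCapacity=1200,  # GiB
    SubnetIds=["subnet-0123456789abcdef0"],
    SecurityGroupIds=["sg-0123456789abcdef0"],
    LustreConfiguration={
        "DeploymentType": "CACHE_1",
        "PerUnitStorageThroughput": 1000,  # MB/s per TiB
        "MetadataConfiguration": {"StorageCapacity": 2400},
    },
    DataRepositoryAssociations=[
        {"FileCachePath": "/us-assets", "DataRepositoryPath": "s3://game-assets-us-east-1"},
        {"FileCachePath": "/eu-assets", "DataRepositoryPath": "s3://game-assets-eu-west-1"},
    ],
)
print(cache["FileCache"]["FileCacheId"])

# Once the cache is AVAILABLE, clients mount it with the open-source Lustre
# client, e.g. sudo mount -t lustre <DNSName>@tcp:/<MountName> /mnt/cache
```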
NetApp Global File Cache (GFC) with Amazon FSx for NetApp ONTAP
For file locking and auditing capabilities, combining Amazon FSx for NetApp ONTAP with NetApp Global File Cache (GFC) is effective. FSx for ONTAP serves as high-performance, secure asset storage on AWS and offers robust scalability, while GFC lets global users access this centralized storage with local performance by caching frequently used files at each location, speeding up access times significantly. GFC integrates with client workstations via direct drive mapping or DNS, and it includes built-in auditing features for tracking file access, useful for compliance and security needs.
NetApp GFC uses a hub-and-spoke architecture. The central data store is the “hub,” and the distributed locations are the “spokes.” The distributed locations connect to the central data store through the GFC Fabric, a secure and reliable network layer.
You need to deploy a GFC Core in the hub and GFC Edge servers in each Region. The GFC Core acts as the control center: it is typically deployed on an Amazon Elastic Compute Cloud (Amazon EC2) instance and decides what to cache, where, and when, while also managing compression and central file locks. The GFC Edges are lightweight servers deployed in each AWS Region where you need to sync data. These Edge servers cache files from FSx for ONTAP and serve them to local applications, ensuring quick and consistent data access globally. You can connect from both Windows and Linux: map network drives that point to the GFC Edge server and browse files in Windows Explorer, or access them via POSIX-compliant APIs.
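The GFC components themselves are configured through NetApp’s tooling, but the FSx for ONTAP file system that backs the hub can be created with boto3, as in the sketch below; the subnet IDs, capacity, and throughput values are placeholders.

```python
import boto3

fsx = boto3.client("fsx", region_name="us-east-1")

# Multi-AZ FSx for ONTAP file system to serve as the central asset store
# behind GFC Core.
fs = fsx.create_file_system(
    FileSystemType="ONTAP",
    StorageCapacity=10240,  # GiB
    SubnetIds=["subnet-0123456789abcdef0", "subnet-0fedcba9876543210"],
    OntapConfiguration={
        "DeploymentType": "MULTI_AZ_1",
        "PreferredSubnetId": "subnet-0123456789abcdef0",
        "ThroughputCapacity": 512,  # MB/s
    },
)
print(fs["FileSystem"]["FileSystemId"])

# A storage virtual machine and volume (create_storage_virtual_machine,
# create_volume) are then exported over SMB/NFS for GFC Core to front.
```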
Hammerspace
For hybrid storage environments, Hammerspace, an AWS Partner solution, offers a data orchestration platform that provides a global file system for enterprises. It integrates with AWS storage services such as Amazon S3 and Amazon Elastic File System (Amazon EFS), letting you extend your existing storage infrastructure into a unified global namespace across Regions while using AWS security and disaster recovery (DR) capabilities. Hammerspace makes managing data across multiple locations easy through metadata nodes and data nodes: metadata nodes communicate directly between locations, ensuring that all critical file information is synchronized in real time, while data nodes use Amazon S3 for efficient data transfer. You can use the Hammerspace AMI from AWS Marketplace and AWS CloudFormation for streamlined setup on AWS.
You can access the assets through Hammerspace nodes, almost like working with a traditional file system on your desktop.
It dynamically syncs essence data precisely when it is needed, making it a strong choice when many studios need to collaborate efficiently on a global file system. You can also use it for multi-Region rendering with AWS Thinkbox Deadline; for more information, see the article Multi-Region Rendering with Deadline and Hammerspace.
Nasuni
Best suited for handling a small number of large files in hybrid storage environments, Nasuni offers a unified view of file data from Amazon S3 buckets, supporting large file sizes and high throughput necessary for game development.
Nasuni is a cloud-based file data platform that provides a unified view of all file data stored in S3 buckets, making it easy to manage and access files from anywhere in the world with local performance. It supports very large files and high throughput, which matters for game and M&E studios that need to store and share large amounts of raw data, since individual game assets can be very large.
You can deploy Nasuni Edge Appliances on premises or on AWS with the Nasuni AMI from AWS Marketplace. These appliances cache frequently accessed files locally and connect to your Amazon S3 buckets. You can apply data retention and access controls through Amazon S3 policies, and you can access Nasuni volumes via standard network protocols such as SMB (CIFS) or NFS.
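For example, a baseline access control you might apply to the bucket backing a Nasuni volume is a policy that rejects non-TLS requests; the bucket name below is hypothetical, and Nasuni’s own documentation governs what the appliance actually requires.

```python
import json
import boto3

BUCKET = "nasuni-volume-store"  # hypothetical bucket backing a Nasuni volume

# Deny any request to the bucket that does not use TLS.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DenyInsecureTransport",
            "Effect": "Deny",
            "Principal": "*",
            "Action": "s3:*",
            "Resource": [
                f"arn:aws:s3:::{BUCKET}",
                f"arn:aws:s3:::{BUCKET}/*",
            ],
            "Condition": {"Bool": {"aws:SecureTransport": "false"}},
        }
    ],
}

boto3.client("s3").put_bucket_policy(Bucket=BUCKET, Policy=json.dumps(policy))
```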
Nasuni replicates data across multiple locations and deduplicates data at the file level, which reduces the amount of data that needs to be stored and transferred. With Nasuni Edge Appliances, frequently used files are cached locally for fast access, while all assets are stored in Amazon S3 in a hybrid model. Nasuni also supports file versioning, built-in backup, and global file locking, ensuring data consistency for collaboration.
A global file system solution that fulfills key performance and collaboration requirements becomes more than just a technological tool: it becomes a strategic asset that empowers AWS customers to redefine what’s possible in game and media production and its collaborative workflows. It allows organizations to efficiently manage vast quantities of data, ensure seamless collaboration across time zones, and maintain high-performance access to essential assets.
Whether you’re looking to streamline game development or enhance media production workflows, AWS provides a suite of global file system solutions tailored to your needs.
Sign in to the AWS Management Console to start innovating, or learn more about AWS storage solutions. Get started today and push the boundaries of creative collaboration.