AWS HPC Blog

Category: Announcements

Discontinuation of NICE EnginFrame effective September 25th, 2025

After careful consideration, we have made the decision to discontinue NICE EnginFrame, including NICE EnginFrame Views, effective September 25, 2025. If you want to continue using NICE EnginFrame beyond the end-of-support date, we recommend contacting NI-SP, an AWS partner with decades of experience implementing and supporting NICE EnginFrame for enterprises.

BioContainers are now available in Amazon ECR Public Gallery

Today we are excited to announce that all 9,000+ applications provided by the BioContainers community are available in the Amazon ECR Public Gallery! You don’t need an AWS account to access these images, but having one allows a much higher rate of pulls to the internet, and unmetered usage from within AWS. If you perform any sort of bioinformatics analysis on AWS, you should check it out!
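
To give a sense of how simple getting started is, here’s a minimal Python sketch that shells out to Docker to pull and run a BioContainers image anonymously. The samtools image tag is illustrative; browse the gallery for the tool and version you actually need.

```python
# Illustrative sketch: pull and run a BioContainers image from the
# Amazon ECR Public Gallery by shelling out to Docker. The samtools
# tag below is an example -- check the gallery for current tags.
import subprocess

# BioContainers images live under the public.ecr.aws/biocontainers namespace.
image = "public.ecr.aws/biocontainers/samtools:1.15--h1170115_0"

# Anonymous pulls need no AWS credentials; authenticating with an AWS
# account raises the pull-rate limits.
subprocess.run(["docker", "pull", image], check=True)
subprocess.run(["docker", "run", "--rm", image, "samtools", "--version"], check=True)
```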

Call for participation: RADIUSS Tutorial Series

Lawrence Livermore National Laboratory (LLNL) and AWS are joining forces to provide a training opportunity for emerging HPC tools and applications. RADIUSS (Rapid Application Development via an Institutional Universal Software Stack) is a broad suite of open-source software projects originating from LLNL. Together we are hosting a tutorial series to give attendees hands-on experience with these cutting-edge technologies. Find out how to participate in these events in this blog post.

Introducing the Spack Rolling Binary Cache hosted on AWS

Today we’re excited to announce the availability of a new public Spack Binary Cache. Through a collaboration between AWS, E4S, Kitware, and Lawrence Livermore National Laboratory (LLNL), Spack users now have access to a public build cache hosted on Amazon S3. Using this binary cache can cut install times by up to 20x for common Spack packages.
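
If you want to try it, a minimal sketch (in Python, driving the Spack CLI) looks something like the following. The mirror URL and package name are assumptions for illustration, so confirm the current cache location in the Spack documentation.

```python
# Minimal sketch: register the public binary cache as a Spack mirror and
# install a package from pre-built binaries. The mirror URL and package
# name are assumptions -- confirm them against the Spack documentation.
import subprocess

def spack(*args: str) -> None:
    """Run a Spack CLI command, raising if it fails."""
    subprocess.run(["spack", *args], check=True)

# Register the public build cache (hosted on Amazon S3) as a mirror.
spack("mirror", "add", "binary_mirror", "https://binaries.spack.io/develop")

# Install and trust the keys the cached binaries were signed with.
spack("buildcache", "keys", "--install", "--trust")

# Installs can now fetch cached binaries instead of building from source.
spack("install", "zlib")
```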

Introducing AWS HPC Connector for NICE EnginFrame

Today we’re introducing AWS HPC Connector, a new feature in NICE EnginFrame that allows customers to leverage managed HPC resources on AWS. With this release, EnginFrame provides a unified interface for administrators to make hybrid HPC resources available to their users both on-premises and within AWS. In this post, we’ll provide some context around EnginFrame’s typical use cases, and show how you can use AWS HPC Connector to stand up HPC compute resources on AWS.

Coming soon: dedicated HPC instances and hybrid functionality

This year, we’ve launched a lot of new capabilities for HPC customers, making AWS the best place for the full length and breadth of their workflows. EFA went mainstream and is now available in sixteen instance families, bringing fast-fabric capabilities to customers scaling MPI and NCCL codes. We’ve written deep-dive studies to explore and explain the optimizations that will drive your workloads faster in the cloud than elsewhere. We released a major new version of AWS ParallelCluster with its own API for controlling the cluster lifecycle. AWS Batch became deeply integrated with AWS Step Functions and now supports fair-share scheduling, with multiple levers to control the experience. Today we’re signaling the arrival of a new HPC-dedicated instance family, the Hpc6a, and an enhanced EnginFrame that will bring the best of the cloud and on-premises together in a single interface.


Introducing fair-share scheduling for AWS Batch

Today we are announcing fair-share scheduling (FSS) for AWS Batch, which provides fine-grained control of scheduling behavior through a scheduling policy. With FSS, customers can avoid the “unfair” situations that strict first-in, first-out scheduling creates, where high-priority jobs can’t “jump the queue” without other jobs being drained first. You can now balance resource consumption between groups of workloads, and have confidence that the shared compute environment is not dominated by a single workload. In this post, we’ll explain how fair-share scheduling works in more detail. You’ll also find a link to a step-by-step workshop at the end of this post, so you can try it out yourself.
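
As a taste of what the workshop walks through, here’s a minimal boto3 sketch that creates a scheduling policy with two share identifiers. The policy name, share identifiers, weights, and reservation value are illustrative, not a recommended configuration.

```python
# Minimal sketch: create an AWS Batch fair-share scheduling policy with
# boto3. All names and numbers below are illustrative.
import boto3

batch = boto3.client("batch")

policy = batch.create_scheduling_policy(
    name="example-fss-policy",
    fairsharePolicy={
        # How long past usage keeps counting against a share identifier.
        "shareDecaySeconds": 3600,
        # Percentage of capacity held back for share identifiers that
        # haven't submitted jobs yet.
        "computeReservation": 25,
        # A lower weightFactor gives that identifier a larger share.
        "shareDistribution": [
            {"shareIdentifier": "teamA", "weightFactor": 1.0},
            {"shareIdentifier": "teamB", "weightFactor": 1.0},
        ],
    },
)

# Jobs opt in by passing shareIdentifier (and optionally
# schedulingPriorityOverride) to submit_job; the policy itself is
# attached to a job queue via its schedulingPolicyArn.
print(policy["arn"])
```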