AWS HPC Blog
Category: Customer Solutions
How BAM supercharged large scale research with AWS Batch
Balyasny Asset Management (BAM), a $22B global investment firm, faced a unique challenge: how to empower 160 investment teams to conduct cutting-edge research across six strategies. Discover how they leveraged AWS Batch and Amazon EKS to supercharge their research capabilities.
Improve engineering productivity using AWS Engineering License Management
This post was contributed by Eran Brown, Principal Engagement Manager, Prototyping Team; Vedanth Srinivasan, Head of Solutions, Engineering & Design; Edmund Chute, Specialist SA, Solution Builder; and Priyanka Mahankali, Senior Specialist SA, Emerging Domains. For engineering companies, the cost of Computer Aided Design and Engineering (CAD/CAE) tools can be as high as 20% of product development cost. […]
Optimizing compute-intensive tasks on AWS
Optimizing workloads for performance and cost-effectiveness is crucial for businesses of all sizes – and especially helpful for workloads in the cloud, where there are a lot of levers you can pull to tune how things run. AWS offers a vast array of instance types in Amazon Elastic Compute Cloud (Amazon EC2) – each with […]
Cross-account HPC cluster monitoring using Amazon EventBridge
Managing extensive HPC workflows? This post details how to monitor resource consumption without compromising security. Check it out for a customizable reference architecture that sends only relevant data to your monitoring account.
How Amazon’s Search M5 team optimizes compute resources and cost with fair-share scheduling on AWS Batch
In this post, we share how Amazon Search optimizes their use of accelerated compute resources using AWS Batch fair-share scheduling to schedule distributed deep learning workloads.
Improving NFL player health using machine learning with AWS Batch
In this post, we’ll show you how the NFL used AWS to scale their ML workloads and produce the first comprehensive dataset of helmet impacts across multiple NFL seasons. They were able to reduce manual labor by 90%, and the results beat human labelers in accuracy by 12%!
Deploying predictive models and simulations at scale using TwinFlow on AWS
AWS TwinFlow is an open-source framework to build and deploy predictive models using heterogeneous compute pipelines on AWS. In this post, we show the versatility of the framework with examples of engineering design, scenario analysis, systems analysis, and digital twins.
Streamlining distributed ML workflow orchestration using Covalent with AWS Batch
Complicated multi-step workflows can be challenging to deploy, especially when using a variety of high-compute resources. Covalent is an open-source orchestration tool that streamlines the deployment of distributed workloads on AWS resources. In this post, we outline key concepts in Covalent and develop a machine learning workflow for AWS Batch in just a handful of steps.
Building a 4x faster and more scalable algorithm using AWS Batch for Amazon Logistics
In this post, AWS Professional Services highlights how they helped data scientists from Amazon Logistics rearchitect their algorithm for improving the efficiency of their supply chain by making better planning decisions. Leveraging best practices for deploying scalable HPC applications on AWS, the teams saw a 4x improvement in run time.
Running accurate, comprehensive, and efficient genomics workflows on AWS using Illumina DRAGEN v4.0
In this blog, we provide a walkthrough of running Illumina DRAGEN v4.0 genomic analysis pipelines on AWS, showing accuracy and efficiency, copy number analysis, structural variants, SMN callers, repeat expansion detection, and pharmacogenomics insights for complex genes. We also highlight some benchmarking results for runtime, cost, and concordance from the Illumina DRAGEN DNA sequencing pipeline.