AWS HPC Blog
Building deep learning models for geoscience using MATLAB and NVIDIA GPUs on Amazon EC2 (Part 2 of 2)
This is the second of a two-part post. Part 1 discussed the workflow for developing AI models in MATLAB for seismic interpretation. Today, we will discuss the various compute resources from AWS and NVIDIA used to develop the models.
Building deep learning models for geoscience using MATLAB and NVIDIA GPUs on Amazon EC2 (Part 1 of 2)
In this blog post, we discuss how geoscientists can use shallow RNN-based algorithms with MATLAB to automatically recognize distinct geologic features in seismic images, and we walk through the workflow for developing these AI models for seismic interpretation. A second post will introduce the various compute resources from AWS and NVIDIA used to develop the models.
Getting the Best Price Performance for Numerical Weather Prediction Workloads on AWS
In this post, we provide an overview of Numerical Weather Prediction (NWP) workloads and the AWS HPC-optimized services that support them. We’ll test three popular NWP codes: WRF, MPAS, and FV3GFS.
AI-based drug discovery with Atomwise and WEKA Data Platform
Drug discovery is an expensive proposition, with a $2.6 billion cost over 10 years and just a 12% success rate. AI promises to significantly improve the success rate by finding small-molecule hits for undruggable targets. On the forefront of using AI in drug discovery is Atomwise, with its AtomNet® platform. In this blog, we lay out the challenges of the drug discovery process and show how AI/ML startups are solving them using solutions from Atomwise, AWS, and WEKA.
Data Science workflows at insitro: how redun uses the advanced service features from AWS Batch and AWS Glue
Matt Rasmussen, VP of Software Engineering at insitro, expands on his first post on redun, insitro’s data science tool for bioinformatics, to describe how redun makes use of advanced AWS features. Specifically, Matt describes how AWS Batch Array Jobs are used to support workflows with large fan-out (see the sketch below), and how AWS Glue’s DynamicFrame is used to run computationally heterogeneous workflows with different back-end needs, such as Spark, all in the same workflow definition.
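To give a feel for the fan-out pattern mentioned above, here is a minimal, hedged sketch (not code from the post itself): in redun, calling a task inside a list comprehension produces lazy expressions the scheduler can dispatch in parallel, which is the wide, homogeneous shape of work that maps naturally onto AWS Batch Array Jobs. The task name and sample identifiers below are hypothetical.

```python
from redun import task


@task()
def process_sample(sample_id: str) -> str:
    # Hypothetical per-sample work (e.g. alignment or QC). Each call becomes
    # a lazy expression that the redun scheduler can run as its own job.
    return f"processed {sample_id}"


@task()
def main(n_samples: int = 100) -> list:
    # Fan-out: the list comprehension creates many independent task calls,
    # the kind of wide stage that Array Jobs can submit efficiently in bulk.
    return [process_sample(f"sample_{i}") for i in range(n_samples)]
```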
Data Science workflows at insitro: using redun on AWS Batch
Matt Rasmussen, VP of Software Engineering at insitro, describes their recently released, open-source data science framework, redun, which allows data scientists to define complex scientific workflows that scale from a laptop to large-scale distributed runs on serverless platforms like AWS Batch and AWS Glue. In this post, Matt shows how redun lends itself to bioinformatics workflows, which typically involve wrapping Unix-based programs that require file staging to and from object storage, as sketched below. In the next blog post, Matt describes how redun scales to large and heterogeneous workflows by leveraging AWS Batch features such as Array Jobs and AWS Glue features such as DynamicFrame.
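As a rough illustration of what “wrapping a Unix-based program” can look like, here is a simplified stand-in rather than the post’s actual code: redun also provides script tasks and a File abstraction for staging data to and from object storage, which this sketch only hints at. The tool invocation, reference name, and paths are assumptions for illustration.

```python
import subprocess

from redun import File, task


@task()
def align_reads(fastq_path: str, out_path: str) -> File:
    # Wrap a command-line bioinformatics tool. In a real workflow the input
    # would typically be staged down from S3 before this step and the output
    # staged back up afterwards.
    with open(out_path, "w") as out:
        subprocess.run(
            ["bwa", "mem", "ref.fa", fastq_path],
            stdout=out,
            check=True,
        )
    # Returning a File lets redun track the output and skip the step on
    # re-runs when nothing has changed.
    return File(out_path)
```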
Creating a digital map of COVID-19 virus for discovery of new treatment compounds
Quantum physics and high-performance computing have slashed research times for a consortium of researchers led by Qubit Pharmaceuticals. This post describes how, using cloud technology, the team discovered chemical substances that may lead to new COVID-19 treatments in only six months.
How to Arm a world-leading forecast model with AWS Graviton and Lambda
The Met Office is the UK’s National Meteorological Service, providing 24×7, world-renowned scientific excellence in weather, climate, and environmental forecasts, and severe weather warnings for the protection of life and property. They provide forecasts and guidance for the public, for government and defence colleagues, and for the private sector. As an example, if you’ve been on a plane over Europe, the Middle East, or Africa, that plane took off because the Met Office (as one of two World Area Forecast Centres) provided a forecast. This article explains one of the ways they use AWS to collect weather observations, which has freed them to focus more on top-quality delivery for their customers.
Join us for our HPC “Speeds n’ Feeds” event on Feb. 9
It’s often difficult to keep track of all the announcements AWS is making around HPC. Come and join us on Feb. 9th for a quick overview of the latest and greatest AWS HPC products and services launched over the past year. You will hear directly from the AWS HPC engineers and product managers who have built these exciting new offerings.
Benchmarking the NVIDIA Clara Parabricks germline pipeline on AWS
This blog provides an overview of NVIDIA’s Clara Parabricks along with a guide on how to use Parabricks through the AWS Marketplace. It focuses on germline analysis for whole-genome and whole-exome applications using GPU-accelerated bwa-mem and GATK’s HaplotypeCaller.