AWS Database Blog

Category: Customer Solutions

How Monzo Bank reduced the cost of TTL for time series index tables in Amazon Keyspaces

At Monzo, we use Amazon Keyspaces (for Apache Cassandra) as our main operational database. Today, we store over 350 TB of data across more than 2,000 tables in Amazon Keyspaces, handling over 2,000,000 reads and 100,000 writes per second at peak. In this post, we share how we replaced the built-in Time to Live (TTL) setting in Amazon Keyspaces with a different row-expiry mechanism, reducing the operating costs of an index while preserving its semantics.
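
The teaser doesn't spell out Monzo's replacement mechanism; one common alternative to per-row TTL is to store an explicit expiry timestamp and filter expired rows at read time, cleaning them up lazily or by dropping old time buckets. A minimal sketch using the open source Cassandra Python driver, with hypothetical keyspace, table, and column names:

```python
# Sketch: row expiry without the built-in TTL setting. Write an explicit
# expires_at column, then treat rows past their expiry as already deleted.
# Keyspace, table, and column names are hypothetical.
from datetime import datetime, timedelta

from cassandra.cluster import Cluster  # pip install cassandra-driver

session = Cluster(["127.0.0.1"]).connect("demo_keyspace")

now = datetime.utcnow()  # the driver maps timestamp columns to naive UTC datetimes

# Write: record when the row should expire instead of using `USING TTL`.
session.execute(
    "INSERT INTO index_by_time (pk, event_time, expires_at) VALUES (%s, %s, %s)",
    ("user-123", now, now + timedelta(days=30)),
)

# Read: filter out expired rows; physical deletion can happen later in bulk.
rows = session.execute(
    "SELECT pk, event_time, expires_at FROM index_by_time WHERE pk = %s",
    ("user-123",),
)
live_rows = [r for r in rows if r.expires_at > datetime.utcnow()]
```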

FundApps’s journey from SQL Server to Amazon Aurora Serverless v2 with Babelfish

FundApps, founded in 2010, is one of the pioneers in the Regulatory Technology (RegTech) space, which includes compliance monitoring and reporting. FundApps decided to rearchitect their environment and transform it into a cloud-based architecture on AWS to better support the growth of their business. For more information, see Faster, cheaper, greener: Pick three — FundApps modernization journey. In this post, we focus on the persistence layer of the FundApps regulatory data service. You learn how FundApps improved the service's scalability, reduced cost, and streamlined operations by migrating from a SQL Server database to a cloud-centered solution combining Amazon Aurora Serverless v2 with Babelfish for Aurora PostgreSQL and Amazon Simple Storage Service (Amazon S3).
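
What makes this migration path notable is that Babelfish speaks the SQL Server wire protocol (TDS), so existing SQL Server client code can keep talking T-SQL to Aurora PostgreSQL. A minimal sketch with the open source pymssql driver; the endpoint, credentials, and database name are placeholders:

```python
# Sketch: connecting to a Babelfish for Aurora PostgreSQL endpoint with an
# unchanged SQL Server client library. Babelfish listens for TDS traffic on
# port 1433 by default; connection details below are placeholders.
import pymssql  # pip install pymssql

conn = pymssql.connect(
    server="my-cluster.cluster-xyz.us-east-1.rds.amazonaws.com",
    port=1433,
    user="app_user",
    password="***",
    database="fund_rules",  # hypothetical database name
)

cur = conn.cursor()
cur.execute("SELECT @@VERSION")  # T-SQL syntax continues to work
print(cur.fetchone()[0])
conn.close()
```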

How the Amazon TimeHub team designed a recovery and validation framework for their data replication framework: Part 4

With AWS DMS, you can use data validation to make sure your data was migrated accurately from the source to the target. If you enable validation for a task, AWS DMS begins comparing the source and target data immediately after a full load is performed for a table. In this post, we describe the custom framework we built on top of AWS DMS validation tasks to maintain data integrity as part of the ongoing replication between source and target databases.
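
As context for what such a framework builds on, here is a minimal sketch of enabling validation on an existing AWS DMS task and polling per-table validation status with boto3; the task ARN is a placeholder:

```python
# Sketch: turn on AWS DMS data validation for an existing replication task,
# then read per-table validation state. The ARN below is a placeholder.
import json

import boto3

dms = boto3.client("dms")
task_arn = "arn:aws:dms:us-east-1:123456789012:task:EXAMPLE"

# Validation is controlled through the task settings document.
settings = {"ValidationSettings": {"EnableValidation": True, "ThreadCount": 5}}
dms.modify_replication_task(
    ReplicationTaskArn=task_arn,
    ReplicationTaskSettings=json.dumps(settings),
)

# Per-table validation results can feed a custom reconciliation framework.
for page in dms.get_paginator("describe_table_statistics").paginate(
    ReplicationTaskArn=task_arn
):
    for table in page["TableStatistics"]:
        print(table["TableName"], table.get("ValidationState"))
```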

How the Amazon TimeHub team handled disruption in an AWS DMS CDC task caused by Oracle RESETLOGS: Part 3

In How the Amazon TimeHub team designed resiliency and high availability for their data replication framework: Part 2, we covered how we handle replication failures at the source database (Oracle), in AWS DMS, and at the target database (Amazon Aurora PostgreSQL-Compatible Edition). During our resilience scenario testing, when a failover occurred from the Oracle primary database instance to a standby instance and the database was opened with RESETLOGS, AWS DMS couldn't automatically read the new set of redo logs belonging to the new database incarnation. In this post, we dive deep into the solution the Amazon TimeHub team used to detect such a scenario and recover from it. We then describe the post-recovery steps to validate and correct the data discrepancies caused by the failover.
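
The post details the team's recovery flow; as an illustration of the detection half, the sketch below polls V$DATABASE for a changed RESETLOGS system change number, which indicates a new incarnation. Connection details are placeholders:

```python
# Sketch: detect an Oracle RESETLOGS event (a new database incarnation) by
# polling V$DATABASE, so the CDC task can be stopped and repositioned.
# Connection details are placeholders; querying V$DATABASE needs privileges.
import oracledb  # pip install oracledb

conn = oracledb.connect(user="dms_user", password="***", dsn="db-host/ORCL")

def incarnation(connection):
    with connection.cursor() as cur:
        cur.execute("SELECT resetlogs_change#, resetlogs_time FROM v$database")
        return cur.fetchone()

baseline = incarnation(conn)
# ... re-check on a schedule, or after a failover alarm fires ...
if incarnation(conn) != baseline:
    print("RESETLOGS detected: stop the AWS DMS CDC task and restart "
          "replication from a position in the new incarnation")
```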

How the Amazon TimeHub team designed resiliency and high availability for their data replication framework: Part 2

In How the Amazon TimeHub team built a data replication framework using AWS DMS: Part 1, we covered how we built a low-latency solution that uses AWS DMS to replicate data from an Oracle database to Amazon Aurora PostgreSQL-Compatible Edition. In this post, we elaborate on our approach to addressing the resilience of the ongoing replication between the source and target databases.

Scaling to 70M users: How Flo Health optimized Amazon DynamoDB for cost and performance

Flo is the largest app in the Health and Fitness category worldwide, with 70 million monthly active users. In this post, we explain the best practices Flo implemented to scale to that user base while reducing their Amazon DynamoDB costs by 60%.

How Firmex used AWS SCT and AWS DMS to move 65,000 on-premises Microsoft SQL Server databases to an Amazon Aurora PostgreSQL cluster

This post is co-authored with Eric Boyer and Maria Hristova of Firmex. Firmex is a leading Virtual Data Room provider, with more than 20,000 new rooms opened every year. In this post, we discuss how and why Firmex performed a heterogeneous migration of 65,000 databases from their on-premises SQL Server to Amazon Aurora PostgreSQL-Compatible Edition.

How Channel Corporation modernized their architecture with Amazon DynamoDB, Part 2: Streams

Channel Corporation is a B2B software as a service (SaaS) startup that operates the all-in-one artificial intelligence (AI) messenger Channel Talk. In Part 1 of this series, we introduced our motivation for NoSQL adoption, the technical problems that came with business growth, and considerations for migrating from PostgreSQL to Amazon DynamoDB. In this post, we share our experience integrating DynamoDB with other services to address use cases that DynamoDB alone couldn't cover.
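
Although the integrations themselves are specific to Channel Talk, the common pattern behind this kind of fan-out is a Lambda function consuming the table's DynamoDB stream and forwarding changes to other services. A minimal, hypothetical handler:

```python
# Sketch: an AWS Lambda handler attached to a DynamoDB stream. The event
# structure follows the DynamoDB Streams record format; the downstream calls
# are placeholders for whatever service consumes the change.
def handler(event, context):
    for record in event["Records"]:
        if record["eventName"] in ("INSERT", "MODIFY"):
            # NewImage is present when the stream view type includes new images.
            item = record["dynamodb"].get("NewImage", {})
            print("upsert:", item)  # e.g. index into search, push to a queue
        elif record["eventName"] == "REMOVE":
            item = record["dynamodb"].get("OldImage", {})
            print("delete:", item)
```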

How Channel Corporation modernized their architecture with Amazon DynamoDB, Part 1: Motivation and approaches

Channel Corporation is a B2B software as a service (SaaS) startup that operates the all-in-one artificial intelligence (AI) messenger Channel Talk. This two-part series starts by presenting the motivation and considerations for migrating from an RDBMS to NoSQL. In this post, we discuss the motivation behind Channel Corporation's architecture modernization with Amazon DynamoDB, the reasons for choosing DynamoDB, and the four major considerations before migrating from Amazon Relational Database Service (Amazon RDS) for PostgreSQL.