AWS for Industries
Acoustic gains 10X throughput rates modernizing their Send Engine on AWS
It is critical for marketing technology campaign platforms to send rich, dynamically sourced content from their customers’ datasets to drive hyper-personalized, relevant messaging. The ability to seamlessly scale in real time during multiple critical times of the year, when customer data volumes peak, can strain resources. Acoustic’s modernized Send Engine does this with substantial gains in throughput rates.
About Acoustic
Acoustic is a global marketing and customer engagement technology company committed to connecting the dots from campaign to conversion. Acoustic leverages in-depth behavioral insights to uncover every interaction throughout the customer journey, enabling hyper-personalized digital experiences. They completed their journey to the Amazon Web Services (AWS) Cloud by migrating 6,300 servers and 8 full-stack Software-as-a-Service (SaaS) applications to AWS in only 10 months in 2022. Adopting the AWS Cloud enabled Acoustic to take advantage of new technologies and scaling capabilities that were not available within on-premises data centers.
In this blog post, we’ll review how Acoustic modernized their strategic Send Engine application using AWS cloud-native services to build a scalable, cost-effective, and resilient solution. The Send Engine is Acoustic’s omni-channel application for personalizing, sending, and tracking emails, text messages, mobile pushes, and WhatsApp messages to Acoustic’s clients’ customers. By modernizing, Acoustic not only can scale with client demand but can also rapidly integrate with future channels where Acoustic’s clients want to reach their audiences.
Send Engine: Previous Architecture
Acoustic’s previous Send Engine technology served its purpose well, but as data volumes grew, the need for hyper-personalized messaging and flexible channel integration evolved, requiring a new architecture. The application was built around a Java code base accessing a traditional relational SQL datastore. Since the same datastore was used for all campaign capabilities (including segmentation, audience management, and marketing automation), there was a limit on the number of connections and code instances that could access the database. This created a fundamental constraint for send execution. The maximum observed throughput was ~5–20 million transactions per hour, depending on datastore load, which did not meet growing business demand.
Diagram-1: Send Engine Prior Architecture
Key Drivers to modernize the Send Engine
In 2021/2022, Acoustic undertook a technology transformation of their Send Engine application, aiming to support scaling for volatile customer load patterns with optimized operations. Updating the existing application was not technically feasible, and managing non-scalable infrastructure capacity would have been cost prohibitive. Acoustic needed easily scalable services where they could control cost-to-performance profiles depending on the workload. This required re-envisioning the Send Engine application to take advantage of available AWS cloud-native solutions and services.
Send Engine: New architecture
To address the constraints of the prior architecture, Acoustic moved to a cloud-native, event-based architecture that provides flexibility for scaling real-time events. Acoustic leveraged AWS services including Amazon Elastic Kubernetes Service (Amazon EKS), Amazon ElastiCache for Redis, Amazon Kinesis, AWS Lambda, and Amazon Simple Storage Service (Amazon S3), along with Karpenter for node scaling and KEDA for event-based Kubernetes pod scaling.
Acoustic implemented a Kinesis stream-based auto-scaler (Diagram-2). The system calculates the number of records to be processed as they arrive from Amazon S3 and writes the count to ElastiCache. A Lambda function then checks the calculated count (via Amazon CloudWatch) every minute. Based on the current scaling count, the expected load, and any cooldown period, it either does nothing or scales the streams up or down as needed. The Lambda logic also limits how many times the Kinesis streams can be scaled within a 24-hour window.
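The gating logic described above can be sketched in Python. This is a minimal illustration, not Acoustic’s implementation: the stream name, cooldown, and thresholds are hypothetical, though the 10-operations-per-24-hours ceiling reflects the default Kinesis quota for `UpdateShardCount` calls per stream.

```python
from typing import Optional

# Hypothetical configuration -- names and values are illustrative only.
STREAM_NAME = "send-engine-stream"
COOLDOWN_SECONDS = 300
MAX_SCALING_OPS_PER_24H = 10  # default Kinesis UpdateShardCount quota per stream


def should_scale(target_shards: int, current_shards: int,
                 seconds_since_last_scale: float,
                 scaling_ops_in_last_24h: int) -> Optional[int]:
    """Gatekeeper logic: return the new shard count, or None to do nothing."""
    if seconds_since_last_scale < COOLDOWN_SECONDS:
        return None  # still cooling down from the previous scaling action
    if scaling_ops_in_last_24h >= MAX_SCALING_OPS_PER_24H:
        return None  # daily scaling budget exhausted
    if target_shards == current_shards:
        return None  # already at the right size
    return target_shards


def apply_scaling(target: int) -> None:
    """Resize the stream; this is the boto3 call a Lambda scaler would make."""
    import boto3  # AWS SDK for Python; available in the Lambda runtime

    boto3.client("kinesis").update_shard_count(
        StreamName=STREAM_NAME,
        TargetShardCount=target,
        ScalingType="UNIFORM_SCALING",  # rebalance all shards evenly
    )
```

In a real handler, the per-minute CloudWatch check would compute `target_shards` from the pending record count, call `should_scale`, and invoke `apply_scaling` only when a non-None value is returned.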
Diagram-2: Send Engine Auto-Scaling Architecture
Diagram-3: Transient State Scaling
Implementation
Diagram-3 illustrates the transient-state auto-scaling of shards over time. The stream starts at its scaled-down minimum of five shards. When a 34M-message send is evaluated, the auto-scaler adds two shards to handle the demand. Subsequently, to continue handling the existing 34M messages plus an additional 15M-message send in a new interval, the auto-scaler adds four more shards. At the end of the period, when the combined message count drops to 26M, the streams are scaled down to three shards.
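The shard arithmetic behind this kind of timeline can be sketched as ceiling division against a per-shard capacity. The capacity constant below is an assumption chosen for illustration, not a figure from the post, so the resulting shard counts are approximate rather than an exact reproduction of Diagram-3; the production auto-scaler also applies the cooldown and daily-limit gates described earlier.

```python
# Hypothetical per-interval capacity of a single shard; the real threshold
# depends on message size and Kinesis per-shard throughput limits.
CAPACITY_PER_SHARD = 5_000_000
MIN_SHARDS = 5


def target_shards(pending_messages: int) -> int:
    # Ceiling division, floored at the stream's minimum shard count.
    return max(MIN_SHARDS, -(-pending_messages // CAPACITY_PER_SHARD))


# Trace a sequence of intervals similar to Diagram-3:
loads = [0, 34_000_000, 34_000_000 + 15_000_000, 26_000_000]
print([target_shards(m) for m in loads])  # [5, 7, 10, 6]
```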
Now that the Kinesis streams could scale up and down, Acoustic needed to match the number of Amazon EKS pods to the number of shards. To scale the pods quickly in step with the current shard count, a KEDA event-based pod scaler was introduced, using the Kinesis shard count as its metric.
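Conceptually, KEDA’s Kinesis scaler reads the stream’s open shard count and sizes the deployment so each pod handles at most a target number of shards. The sketch below models that replica calculation in Python; the parameter names and limits are illustrative assumptions, not KEDA configuration keys.

```python
import math


def desired_replicas(open_shards: int, shards_per_pod: int = 1,
                     min_replicas: int = 1, max_replicas: int = 50) -> int:
    """Size the deployment so each pod owns at most `shards_per_pod` shards."""
    replicas = math.ceil(open_shards / shards_per_pod)
    # Clamp to the deployment's configured replica bounds.
    return max(min_replicas, min(max_replicas, replicas))


print(desired_replicas(7))                     # 7 pods for a 1:1 shard mapping
print(desired_replicas(11, shards_per_pod=2))  # 6 pods when pods take 2 shards
```

A 1:1 shard-to-pod mapping keeps consumer ordering simple, since each Kinesis shard is then read by exactly one pod.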
To achieve real savings on the Amazon EKS side, the Amazon Elastic Compute Cloud (Amazon EC2) instances also needed to scale up and down. Because new Amazon EC2 nodes had to come online in under a minute, the default cluster auto-scaler was too slow, so Karpenter was leveraged instead. This enabled Acoustic to assign dedicated node groups to the Send Engine and tailor the node types and scaling for just this application. A reference architecture of the modernized Send Engine is presented in Diagram-4.
Diagram-4: Modernized Send Engine Reference Architecture
With the new patterns in place, maximum throughput increased 10X to more than 100 million transactions per hour. Among the new data access patterns implemented, Acoustic leveraged Amazon S3 and Amazon DynamoDB to overcome the scaling constraints of a single relational database.
Conclusion
In this blog post, we outlined how Acoustic’s modernization pathway leveraged AWS cloud-native services and newly developed patterns to build a scalable, resilient Send Engine solution that meets business volatility and growth. Adopting new patterns and consolidating platform technologies that can easily be leveraged across product lines accelerated Acoustic’s business priorities and agility in the cloud.
If you want to get started with application modernization, see the Solutions for Migration and Modernization page, where you’ll find a broad range of examples, or contact an AWS representative anytime.