AWS Compute Blog
Building a serverless pipeline to deliver reliable messaging
This post is written by Jeff Harman, Senior Prototyping Architect, Vaibhav Shah, Senior Solutions Architect and Erik Olsen, Senior Technical Account Manager.
Many industries are required to provide audit trails for decision and transactional systems. AI-assisted decision making requires monitoring the full inputs to the decision system in near real time to prevent fraud and to detect model drift and discrimination. Modern systems often use a much wider array of inputs for decision making, including images, unstructured text, historical values, and other large data elements. These large data elements pose a challenge to traditional audit systems that deal with relatively small text messages in structured formats. This blog shows the use of serverless technology to create a reliable, performant, traceable, and durable pipeline for audit processing.
Overview
Consider the following four requirements to develop an architecture for audit record ingestion:
- Audit record size: Store and manage large payloads (256 KB – 6 MB in size) that may be heterogeneous, including text, binary data, and references to other storage systems.
- Audit traceability: The stored data provides full traceability of the payload, and external processes can monitor progress via subscription-based events.
- High performance: The time required for blocking writes to the system is limited to the time it takes to transmit the audit record over the network.
- High data durability: Once the system sends a payload receipt, the payload is at very low risk of loss because of system failures.
The following diagram shows an architecture that meets these requirements and models the flow of the audit record through the system.
The primary source of latency is the time it takes for an audit record to be transmitted across the network. Applications sending audit records make an API call to an Amazon API Gateway endpoint. An AWS Lambda function receives the message and an Amazon ElastiCache for Redis cluster provides a low latency initial storage mechanism for the audit record. Once the data is stored in ElastiCache, the AWS Step Functions workflow then orchestrates the communication and persistence functions.
Subscribers receive four Amazon Simple Notification Service (Amazon SNS) notifications pertaining to the arrival and storage of the audit record payload, the storage of the audit record metadata, and audit record archive completion. Users can subscribe an Amazon Simple Queue Service (Amazon SQS) queue to the SNS topic and use fan-out mechanisms to achieve high reliability; a subscription sketch follows the list below.
- The Ingest Message Lambda function sends an initial receipt notification
- The Message Archive Handler Lambda function notifies on storage of the audit record from ElastiCache to Amazon Simple Storage Service (Amazon S3)
- The Message Metadata Handler Lambda function notifies on storage of the message metadata into Amazon DynamoDB
- The Final State Aggregation Lambda function notifies that the audit record has been archived.
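For example, a minimal sketch of subscribing an SQS queue to the SNS topic looks like the following. The topic and queue ARNs are placeholders, not the actual output values of the deployed stack:
import boto3

# Placeholder ARNs; substitute the SNS topic and SQS queue created by your deployment.
TOPIC_ARN = "arn:aws:sns:us-east-1:123456789012:audit-notifications"
QUEUE_ARN = "arn:aws:sqs:us-east-1:123456789012:audit-subscriber-queue"

sns = boto3.client("sns")

# Subscribe the SQS queue to the SNS topic so it receives all four notifications.
subscription = sns.subscribe(
    TopicArn=TOPIC_ARN,
    Protocol="sqs",
    Endpoint=QUEUE_ARN,
    Attributes={"RawMessageDelivery": "true"},  # deliver the message body without the SNS envelope
)
print(subscription["SubscriptionArn"])
Note that the queue's access policy must also allow the SNS topic to send messages to it.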
Any failure in one of the three fundamental processing steps (Ingestion, Data Archive, and Metadata Archive) triggers a message to an SQS dead-letter queue (DLQ), which contains the original request and an explanation of the failure reason. Any failure in the Ingest Message function invokes the Ingest Message Failure function, which stores the original parameters to the S3 Failed Message Storage bucket for later analysis.
The Step Functions workflow provides orchestration and parallel path execution for the system. The detailed workflow below shows the execution flow and notification actions. The transformer steps convert the internal data structures into the format required for consumers.
Data structures
There are three types of events and messages managed by this system, illustrated by the sketch after this list:
- Incoming message: This is the message the producer sends to an API Gateway endpoint.
- Internal message: This event contains the message metadata allowing subsequent systems to understand the originating message producer context.
- Notification message: Messages that allow downstream subscribers to act based on the message.
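As an illustration, a hypothetical internal message might look like the following. The field names are assumptions made for this sketch and are not the exact schema used by the sample code:
# Hypothetical internal message shape (field names are illustrative only)
internal_message = {
    "messageID": "d3f1a2...",                      # ID generated by the Ingest Message function
    "producer": "loan-decision-service",           # originating message producer context
    "receivedTimestamp": "2023-05-01T12:00:00Z",   # time the payload arrived at the API
    "cacheKey": "d3f1a2...",                       # ElastiCache key holding the full payload
    "securityContext": {"apiKeyId": "abc123"},     # security header created by API Gateway
}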
Solution walkthrough
The message producer calls the API Gateway endpoint, which enforces the security requirements defined by the business. In this implementation, API Gateway uses an API key to provide more robust security. API Gateway also creates a security header for consumption by the Ingest Message Lambda function. API Gateway can be configured to enforce message format standards; see Use request validation in API Gateway for more information.
The Ingest Message Lambda function generates a message ID that tracks the message payload throughout its lifecycle, then stores the full message in the ElastiCache for Redis cache. It also generates an internal message with all the elements described above. Finally, the Lambda function handler code starts the Step Functions workflow with the internal message payload.
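A minimal sketch of this handler is shown below, assuming a redis-py client and illustrative environment variable names (REDIS_HOST, STATE_MACHINE_ARN) rather than the exact code in the repository:
import json
import os
import uuid

import boto3
import redis

# Illustrative configuration; the sample project may use different names.
cache = redis.Redis(host=os.environ["REDIS_HOST"], port=6379)
sfn = boto3.client("stepfunctions")

def handler(event, context):
    # Generate the message ID that tracks the payload throughout its lifecycle.
    message_id = str(uuid.uuid4())

    # Store the full payload in ElastiCache for Redis for low-latency initial storage.
    cache.set(message_id, event["body"])

    # Build the internal message and start the Step Functions workflow.
    internal_message = {
        "messageID": message_id,
        "securityContext": event.get("headers", {}),
    }
    sfn.start_execution(
        stateMachineArn=os.environ["STATE_MACHINE_ARN"],
        input=json.dumps(internal_message),
    )

    return {"statusCode": 200, "body": json.dumps({"messageID": message_id})}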
If the Ingest Message Lambda function fails for any reason, the Lambda function invokes the Ingestion Failure Handler Lambda function. This Lambda function writes any recoverable incoming message data to an S3 bucket and sends a notification on the Ingest Message dead letter queue.
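A sketch of that failure path, again using illustrative bucket and queue variable names rather than the repository's actual configuration, might look like this:
import json
import os

import boto3

s3 = boto3.client("s3")
sqs = boto3.client("sqs")

def handler(event, context):
    # Persist whatever incoming message data could be recovered for later analysis.
    s3.put_object(
        Bucket=os.environ["FAILED_MESSAGE_BUCKET"],  # illustrative name for the Failed Message Storage bucket
        Key=f"failed/{event.get('requestId', 'unknown')}.json",
        Body=json.dumps(event).encode("utf-8"),
    )
    # Send the failure details to the Ingest Message dead-letter queue.
    sqs.send_message(
        QueueUrl=os.environ["INGEST_DLQ_URL"],       # illustrative name for the DLQ URL
        MessageBody=json.dumps({
            "requestId": event.get("requestId", "unknown"),
            "reason": event.get("errorMessage", "unknown failure"),
        }),
    )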
The Step Functions workflow then runs three processes in parallel.
- The Step Functions workflow triggers the Message Archive Data Handler Lambda function to persist message data from the ElastiCache cache to an S3 bucket (see the sketch after this walkthrough). Once stored, the Lambda function returns the S3 bucket reference and state information. There are two options for removing the internal message from the cache: delete it explicitly once the internal message has been sent and the ElastiCache cache flag has been updated, or wait for the ElastiCache lifecycle to remove the stale message from the cache. This solution waits for the ElastiCache lifecycle to remove the message.
- The workflow triggers the Message Metadata Handler Lambda function to write all message metadata and security information to DynamoDB. The Lambda function replies with the DynamoDB reference information.
- Finally, the Step Functions workflow sends a message to the SNS topic to inform subscribers that the message has arrived and the data persistence processes have started.
As each Lambda function completes its work, it sends a notification to the SNS notification topic to alert subscribers that the action is complete. When both the Message Metadata and Message Archive Lambda functions are done, the Final Aggregation function makes a final update to the metadata in DynamoDB to include the S3 reference information and to remove the ElastiCache Redis reference.
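To make the archive step concrete, here is a minimal sketch of the Message Archive Data Handler, assuming a redis-py client and REDIS_HOST and ARCHIVE_BUCKET environment variables (names chosen for illustration, not taken from the repository):
import os
from datetime import datetime, timezone

import boto3
import redis

cache = redis.Redis(host=os.environ["REDIS_HOST"], port=6379)
s3 = boto3.client("s3")

def handler(event, context):
    message_id = event["messageID"]

    # Read the full payload from ElastiCache; the stale entry is left to expire via the cache lifecycle.
    payload = cache.get(message_id)

    # Persist the payload to the S3 message archive bucket under a time-based key.
    key = datetime.now(timezone.utc).strftime("%Y/%m/%d/%H/%M/") + message_id
    s3.put_object(Bucket=os.environ["ARCHIVE_BUCKET"], Key=key, Body=payload)

    # Return the S3 reference and state information for the Final State Aggregation step.
    return {"messageID": message_id, "s3Bucket": os.environ["ARCHIVE_BUCKET"], "s3Key": key}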
Deploying the solution
Prerequisites:
- AWS Serverless Application Model (AWS SAM) is installed (see Getting started with AWS SAM)
- AWS User/Credentials with appropriate permissions to run AWS CloudFormation templates in the target AWS account
- Python 3.8 – 3.10
- The AWS SDK for Python (Boto3) is installed
- The requests Python library is installed
The source code for this implementation can be found at https://github.com/aws-samples/blog-serverless-reliable-messaging
Installing the Solution:
- Clone the git repository to a local directory
git clone https://github.com/aws-samples/blog-serverless-reliable-messaging.git
- Change into the directory that was created by the clone operation, usually
blog_serverless_reliable_messaging
- Execute the command:
sam build
- Execute the command:
sam deploy --guided
You are asked to supply the following parameters:
- Stack Name: Name given to this deployment (example: serverless-messaging)
- AWS Region: Where to deploy (example: us-east-1)
- ElasticacheInstanceClass: EC2 cache instance type to use (example: cache.t3.small)
- ElasticReplicaCount: How many replicas should be used with ElastiCache (recommended minimum: 2)
- ProjectName: Used for naming resources in account (example: serverless-messaging)
- MultiAZ: True/False whether multiple Availability Zones should be used (recommended: True)
- The default parameters can be selected for the remainder of the questions
Testing:
Once you have deployed the stack, you can test it through the API Gateway endpoint with the API key that is referenced in the deployment output. There are two methods for retrieving the API key: via the AWS console, using the ApiKeyConsole link provided in the output, or via the AWS CLI, using the APIKeyCLI reference in the output.
You can test directly in the Lambda service console by invoking the ingest message function.
A test message, test_message.json, is available at the root of the project for direct Lambda function testing of the Ingest function.
- In the console navigate to the Lambda service
- From the list of available functions, select the “<project name>-IngestMessageFunction-xxxxx” function
- Under the “Function overview” select the “Test” tab
- Enter an event name of your choosing
- Copy and paste the contents of test_message.json into the “Event JSON” box
- Click “Save”, then after it has saved, click “Test”
- If successful, you should see something similar to the following in the details:
{ "isBase64Encoded": false, "statusCode": 200, "headers": { "Access-Control-Allow-Headers": "Content-Type", "Access-Control-Allow-Origin": "*", "Access-Control-Allow-Methods": "OPTIONS,POST" }, "body": "{\"messageID\": \"XXXXXXXXXXXXXX\"}" }
- In the S3 bucket “<project name>-s3messagearchive-xxxxxx”, find the payload of the original JSON with a key based on the date and time of the script execution, e.g. YEAR/MONTH/DAY/HOUR/MINUTE, with a file name of the messageID
- In the DynamoDB table named metaDataTable, you should find a record with a messageID equal to the messageID from above that contains all of the metadata related to the payload
A Python script is included with the code in the test_client folder (a minimal request sketch follows these steps):
- Replace the <Your API key here> and the <Your API Gateway URL here (IngestMessageApi)> values with the correct ones for your environment in the test_client.py file
- Execute the test script with Python 3.8 or higher with the requests package installed
Example execution (from main directory of git clone):
python3 -m pip install -r ./test_client/requirements.txt
python3 ./test_client/test_client.py
- Successful output shows the messageID and the header JSON payload:
{ "messageID": " XXXXXXXXXXXXXX" }
- In the S3 bucket “<project name>-s3messagearchive-xxxxxx”, you should be able to find the payload of the original JSON with a key based on the date and time of the script execution, e.g. YEAR/MONTH/DAY/HOUR/MINUTE, with a file name of the messageID
- In the DynamoDB table named metaDataTable, you should find a record with a messageID equal to the messageID from above that contains all of the metadata related to the payload
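For reference, the request the test client makes is conceptually similar to the following sketch; test_client.py in the repository is the authoritative version, and the URL and key values below are the same placeholders you replace in that file:
import requests  # install with: python3 -m pip install requests

API_URL = "<Your API Gateway URL here (IngestMessageApi)>"
API_KEY = "<Your API key here>"

# Load the sample audit record shipped with the project.
with open("test_message.json") as f:
    payload = f.read()

# POST the audit record to the ingest endpoint; API Gateway validates the x-api-key header.
response = requests.post(
    API_URL,
    data=payload,
    headers={"x-api-key": API_KEY, "Content-Type": "application/json"},
)
print(response.status_code, response.text)  # expect HTTP 200 and a body containing the messageID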
Conclusion
This blog describes architectural patterns, messaging patterns, and data structures that support a highly reliable messaging system for large messages. The use of serverless services including Lambda functions, Step Functions, ElastiCache, DynamoDB, and S3 meets the requirements of modern audit systems to be scalable and reliable. The architecture shared in this blog post is suitable for a highly regulated environment that must store and track messages larger than those handled by typical logging systems, with records sized between 256 KB and 6 MB. The architecture serves as a blueprint that can be extended and adapted to fit further serverless use cases.
For serverless learning resources, visit Serverless Land.