AWS Compute Blog
Integrating Amazon MemoryDB for Redis with Java-based AWS Lambda
This post is written by Mansi Y Doshi, Consultant and Aditya Goteti, Sr. Lead Consultant.
Enterprises are modernizing and migrating their applications to the AWS Cloud to improve scalability, reduce cost, innovate faster, and reduce the time to market for new features. Legacy applications are often built with a relational database (RDBMS) as the only backend solution.
Modernizing legacy Java applications with microservices requires breaking down a single monolithic application into multiple independent services. Each microservice does a specific job and requires its own database to persist data, but one database does not fit all use cases. Modern applications require purpose-built databases catering to their specific needs and data models.
This post discusses some of the common use cases for one such data store, Amazon MemoryDB for Redis, which is built to provide durability and faster reads and writes.
Use cases
Modern tech stacks often begin with a backend that interacts with a durable database like MongoDB, Amazon Aurora, or Amazon DynamoDB for their data persistence needs.
But as traffic volume increases, it often makes sense to introduce a caching layer like Amazon ElastiCache. The service logic populates the cache on each database read so that subsequent reads of the same data become faster. While ElastiCache is effective, you must manage and pay for two separate data sources holding the same data. You must also write custom logic for cache reads and writes in addition to the existing read/write logic for the durable database.
While traditional databases like MySQL, PostgreSQL, and DynamoDB provide data durability at the cost of speed, transient data stores like ElastiCache trade durability for faster reads and writes (usually within microseconds). ElastiCache provides writes and strongly consistent reads on the primary node of each shard, and eventually consistent reads from read replicas. The latest data written to the primary node can be lost during a failover, which makes ElastiCache fast but not durable.
MemoryDB addresses both of these issues. It provides strong consistency on the primary node and eventually consistent reads on replica nodes. The consistency model of MemoryDB is similar to that of ElastiCache for Redis. However, in MemoryDB, data is not lost across failovers, allowing clients to read their writes from primaries regardless of node failures. Only data that is successfully persisted in the Multi-AZ transaction log is visible. Replica nodes are still eventually consistent. Because of its distributed transaction model, MemoryDB can provide both durability and microsecond response times.
MemoryDB is ideal for services that are read-heavy and latency-sensitive, such as configuration, search, authentication, and leaderboard services. These must operate at microsecond read latency and still be able to persist the data for high availability and durability. Services like leaderboards, with millions of records, often break the data into smaller chunks/batches and process them in parallel. This needs a data store that can perform calculations on the fly and also store results temporarily. Redis can process millions of operations per second, store temporary calculations for fast retrieval, and also run other operations (like aggregations). Because Redis is single-threaded from the command-execution point of view, it also helps avoid dirty reads and writes.
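As a minimal sketch of the leaderboard pattern, the sorted-set semantics a Redis leaderboard relies on (ZADD to upsert a score, ZREVRANGE to read the top N) can be modeled in plain Java. In production, these operations would be Jedis calls against the cluster; the class and method names here are illustrative:

```java
import java.util.*;
import java.util.stream.*;

// In-memory sketch of the sorted-set semantics a Redis leaderboard uses.
// Against MemoryDB, addScore would be jedisCluster.zadd(...) and topN
// would be jedisCluster.zrevrange(...).
public class LeaderboardSketch {
    private final Map<String, Double> scores = new HashMap<>();

    // Equivalent of ZADD leaderboard GT <score> <member>:
    // keep the higher of the existing and new score.
    public void addScore(String player, double score) {
        scores.merge(player, score, Math::max);
    }

    // Equivalent of ZREVRANGE leaderboard 0 n-1: top n players by score.
    public List<String> topN(int n) {
        return scores.entrySet().stream()
                .sorted(Map.Entry.<String, Double>comparingByValue().reversed())
                .limit(n)
                .map(Map.Entry::getKey)
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        LeaderboardSketch lb = new LeaderboardSketch();
        lb.addScore("alice", 120);
        lb.addScore("bob", 95);
        lb.addScore("carol", 150);
        lb.addScore("alice", 110); // lower score, best score is kept
        System.out.println(lb.topN(2)); // prints [carol, alice]
    }
}
```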
Another use case is a configuration service, where users store, change, and retrieve their configuration data. In large distributed systems, there are often hundreds of independent services interacting with each other using well-defined REST APIs. These services depend on the configuration data to perform specific actions. The configuration service must serve the required information at a low latency to avoid being a bottleneck for the other dependent services.
MemoryDB can serve reads at microsecond latencies durably. It also persists data across multiple Availability Zones, using a multi-Availability Zone transaction log to enable fast failover, database recovery, and node restarts. You can use it as a primary database without maintaining a separate cache to lower data access latency, which also reduces cost.
These use cases are a good fit for using MemoryDB. Next, you see how to access, store, and retrieve data in MemoryDB from your Java-based AWS Lambda function.
Overview
This blog shows how to build an Amazon MemoryDB cluster and integrate it with AWS Lambda. Amazon API Gateway and Lambda can be paired together to create a client-facing application that is easier to maintain, highly scalable, and secure. Both are fully managed services with no need to provision or manage servers. They can be cost effective compared to running the application on servers, particularly for workloads with long idle periods. Using Lambda authorizers, you can also write custom code to control access to your API.
Walkthrough
The following steps show how to provision an Amazon MemoryDB cluster along with an Amazon VPC, subnets, and security groups, and integrate it with a Lambda function using the Jedis Java client for Redis. The Lambda function is configured to connect to the same VPC in which MemoryDB is provisioned. Provisioning is done through an AWS SAM template.
Prerequisites
- Create an AWS account if you do not already have one and log in.
- Configure your account and set up permissions to access MemoryDB.
- Install Java 8 or above.
- Install Maven.
- Install the Java client for Redis (Jedis).
- Install the AWS SAM CLI if you do not already have it.
Creating the MemoryDB cluster
Refer to the serverless pattern for a quick setup and customize it as required. The AWS SAM template creates the VPC, subnets, security groups, MemoryDB cluster, API Gateway, and Lambda function.
To access the MemoryDB cluster from the Lambda function, the security group of the Lambda function is added to the security group of the cluster. The MemoryDB cluster is always launched in a VPC. If the subnet is not specified, the cluster is launched into your default Amazon VPC.
You can also use your existing VPC and subnets and customize the template accordingly. If you are creating a new VPC, you can change the CIDR block and other configuration values as needed. Make sure DNS hostnames and DNS support are enabled for the VPC. Use the optional parameters section to customize your templates. Parameters enable you to input custom values to your template each time you create or update a stack.
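For example, a Parameters section along the following lines lets you override values at deploy time (the parameter names here are illustrative, not taken from the pattern):

```
Parameters:
  VpcCidrBlock:
    Type: String
    Default: 10.0.0.0/16
    Description: CIDR block for the new VPC
  MemoryDBNodeType:
    Type: String
    Default: db.t4g.small
    Description: Node type for the MemoryDB cluster
```

You can then pass custom values on each deployment, for example with `sam deploy --parameter-overrides VpcCidrBlock=10.1.0.0/16`.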
Recommendations
As your workload requirements change, you might want to increase the performance of your cluster or reduce costs by scaling the cluster in or out. To improve performance, you can scale your cluster horizontally by increasing the number of read replicas or shards, for read and write throughput respectively.
To reduce cost when instances are over-provisioned, you can scale the cluster vertically down to a smaller node type, or scale up to a larger node type to overcome CPU bottlenecks or memory pressure. Both vertical and horizontal scaling are applied with no downtime, and cluster restarts are not required. You can customize the following parameters in the memoryDBCluster resource as required.
NodeType: db.t4g.small
NumReplicasPerShard: 2
NumShards: 2
In MemoryDB, all writes are carried out on the primary node of a shard, and reads can be offloaded to the replica nodes. Identifying the right number of read replicas, node type, and shards in a cluster is crucial to get optimal performance and to avoid additional cost from over-provisioning resources. It's recommended to start with the minimal resources required and scale out as needed.
Replicas improve read scalability. It is recommended to have at least two read replicas per shard, but depending on the payload size, read-heavy workloads might need more. Adding more read replicas than required gives no performance improvement and incurs additional cost. The following benchmarking was performed using the redis-benchmark tool, with only GET requests to simulate a read-heavy workload.
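A read-heavy test like the one described above can be reproduced with a command along these lines, run from an EC2 instance in the same VPC as the cluster. The endpoint is a placeholder, and the client count is an assumption you should tune:

```shell
# Simulate a read-heavy workload: 10 million GET requests with a 1 KB payload.
# Replace the endpoint with your cluster endpoint; MemoryDB requires TLS.
redis-benchmark \
  -h clustercfg.my-cluster.xxxxxx.memorydb.us-east-1.amazonaws.com \
  -p 6379 --tls \
  -t get -n 10000000 -d 1024 -c 50
```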
The metrics on both clusters were almost the same for 10 million requests with a 1 KB payload per request. After increasing the payload size to 5 KB and the number of GET requests to 20 million, the cluster with two primaries and two replicas could not process the load, whereas the second cluster processed it successfully. To achieve the right sizing, load testing is recommended on a staging/pre-production environment with a load similar to production.
Creating a Lambda function and allowing access to the MemoryDB cluster
In the lambda-redis/HelloWorldFunction/pom.xml file, add the following dependency. This adds the Jedis client used to connect to the MemoryDB cluster:
<dependency>
    <groupId>redis.clients</groupId>
    <artifactId>jedis</artifactId>
    <version>4.2.0</version>
</dependency>
The simplest way to connect the Lambda function to the MemoryDB cluster is by configuring it within the same VPC where the MemoryDB cluster was launched.
To create a Lambda function, add the following code in the template.yaml file in the Resources section:
  HelloWorldFunction:
    Type: AWS::Serverless::Function
    Properties:
      CodeUri: HelloWorldFunction
      Handler: helloworld.App::handleRequest
      Runtime: java8
      MemorySize: 512
      Timeout: 900 # seconds
      Events:
        HelloWorld:
          Type: Api
          Properties:
            Path: /hello
            Method: get
      VpcConfig:
        SecurityGroupIds:
          - !GetAtt lambdaSG.GroupId
        SubnetIds:
          - !GetAtt privateSubnetA.SubnetId
          - !GetAtt privateSubnetB.SubnetId
      Environment:
        Variables:
          ClusterAddress: !GetAtt memoryDBCluster.ClusterEndpoint.Address
Java code to access MemoryDB
- In your Java class, connect to Redis using the Jedis client:

HostAndPort hostAndPort = new HostAndPort(System.getenv("ClusterAddress"), 6379);
JedisCluster jedisCluster = new JedisCluster(Collections.singleton(hostAndPort),
        5000, 5000, 2, null, null, new GenericObjectPoolConfig<>(), true);
- You can now perform set and get operations on Redis as follows:

jedisCluster.set("test", "value");
jedisCluster.get("test");
JedisCluster maintains its own pool of connections and takes care of connection teardown. But you can also customize the configuration for closing idle connections using the GenericObjectPoolConfig object.
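As a sketch of that customization, assuming Jedis 4.x with its bundled Apache commons-pool2 (older commons-pool2 versions use the *Millis variants of the eviction setters), the pool could be configured like this and passed as the GenericObjectPoolConfig argument of the JedisCluster constructor shown earlier:

```java
import java.time.Duration;
import org.apache.commons.pool2.impl.GenericObjectPoolConfig;
import redis.clients.jedis.Connection;

// Sketch: tune the connection pool Jedis maintains per cluster node.
// The numbers below are illustrative starting points, not recommendations.
GenericObjectPoolConfig<Connection> poolConfig = new GenericObjectPoolConfig<>();
poolConfig.setMaxTotal(16);  // max connections per node
poolConfig.setMaxIdle(8);    // max idle connections retained
poolConfig.setMinIdle(1);    // keep one warm connection
// Evict connections idle longer than 60s, checking every 30s
poolConfig.setMinEvictableIdleTime(Duration.ofSeconds(60));
poolConfig.setTimeBetweenEvictionRuns(Duration.ofSeconds(30));
```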
Clean Up
To delete the entire stack, run the command sam delete.
Conclusion
In this post, you learned how to provision a MemoryDB cluster and access it using Lambda. MemoryDB is suitable for applications requiring microsecond reads and single-digit millisecond writes along with durable storage. Accessing MemoryDB through Lambda and API Gateway removes the need to provision and maintain servers.
For more serverless learning resources, visit Serverless Land.