Building serverless applications
Amazon Aurora is a MySQL- and PostgreSQL-compatible relational database that combines the performance and availability of traditional enterprise databases with the simplicity and cost-effectiveness of open-source databases. Amazon Aurora Serverless v2 is an on-demand, auto-scaling configuration for Amazon Aurora (MySQL-compatible and PostgreSQL-compatible editions) that automatically scales database capacity up or down based on your application's needs. It enables you to run your database in the cloud without managing any database instances, and it's a simple, cost-effective option for infrequent, intermittent, or unpredictable workloads.
In this tutorial, you will learn how to create a serverless message processing application with Amazon Aurora Serverless v2 (PostgreSQL-compatible edition), the Data API for Aurora Serverless v2, AWS Lambda, and Amazon Simple Notification Service (SNS). The tutorial provides step-by-step instructions to create an Aurora Serverless v2 database cluster, use the Data API to connect it to an AWS Lambda function that consumes messages from Amazon SNS, and store those messages in Aurora Serverless v2.
About this Tutorial | |
---|---|
Time | 10-20 minutes |
Cost | Less than $1 |
Use Case | Databases |
Products | Amazon Aurora, Amazon SNS, AWS Lambda |
Level | 100 |
Last Updated | October 17, 2024 |
Step 1: Create your Aurora serverless database
1.1 — Open a browser and navigate to the Amazon RDS console. If you already have an AWS account, log in to the console. Otherwise, create a new AWS account to get started.
On the top right corner, select the region where you want to launch the Aurora DB cluster.
1.2 — Click on "Create database" in the Amazon Aurora window.
You'll find a "Create database" button on the RDS Dashboard page, and on the Databases page that lists your Aurora clusters and RDS instances.
The "Create database" workflow includes many options that you don't need to worry about for this tutorial. We'll only call out the options where you need to change the default setting or enter a specific value.
Database creation method
1.3 — Select "Standard create"
For this tutorial, we'll need to select some specific options. Thus, we choose "Standard create" to be able to pick from every available setting.
Engine options
1.4 — On Database engine, select "Aurora (PostgreSQL Compatible)".
For this exercise, we'll use the default Aurora PostgreSQL version, which is 15.4 as of the time this tutorial was published. Any higher version is fine too.
Database features
1.5 — Select "Dev/Test" under Templates.
Settings
1.6 — Choose an identifier for your Aurora DB cluster
For the examples in this tutorial, we'll use "MyClusterName" for this identifier.
Credentials settings
1.7 — Select a username and password for your database.
For real-world use, you would use robust credential management such as Secrets Manager and have the system generate a strong password for you. For this tutorial, we'll use the simplest type of credential management because you'll delete the Aurora cluster at the end.
First pick "Self managed". Then enter a password of your choice for "Master password" and "Confirm master password".
Instance configuration
Choose "Serverless v2" for the DB instance class. Doing so brings up fields where you can specify the minimum and maximum capacity of each DB instance in your cluster. For this tutorial, we'll use a single DB instance for simplicity.
For "Capacity range", enter 0.5 for "Minimum capacity (ACUs)" and 16 for "Maximum capacity (ACUs)". This will let your compute capacity scale between 1 GB of RAM and 32 GB, as needed to handle the current level of activity in the database.
Availability and durability
For "Multi-AZ deployment", choose "Don't create an Aurora replica". For this exercise using a short-lived dev/test cluster, it's not important to have a second DB instance as a standby server.
Connectivity
1.8 — Select the VPC where you want to create the database.
The virtual private cloud (VPC) defines a set of IP addresses that you can use for related resources. It provides an extra layer of security for Aurora clusters that you designate as nonpublic. Note that once created, a database can't be migrated to a different VPC.
1.9 — Create a new DB subnet group
The DB subnet group defines how the storage for the Aurora cluster is spread across three Availability Zones (AZs) for durability. That way, the data is safe even if the DB instances experience issues, or even in the rare case of an AZ-wide outage. When you create a new VPC, you'll need a new DB subnet group to go along with it.
For this tutorial, we also turn off public access to the Aurora cluster. That way, any Linux servers, client applications, or other resources outside the VPC can't access the IP addresses of the Aurora cluster at all. Later, we'll use the RDS Query Editor to connect to the Aurora cluster and run SQL statements without needing to set up a client system inside the VPC.
1.10 — On VPC security group, select "create new".
The VPC security group defines which IP addresses are allowed to connect to which ports for the Aurora cluster. Again, we'll set up restrictive rules to minimize how many client systems can access the cluster, even from inside the VPC. And again, we'll use the Query Editor so that you, the owner of the cluster, can run SQL statements on it without extensive network setup.
For the new VPC security group name, type "MyClusterGroup".
If you already have a security group that allows incoming TCP connections on port 5432, you can choose it instead of creating a new one.
1.11 — Enable the Data API.
The RDS Data API lets you submit SQL statements to an Aurora cluster and get back the results in your application, without the need to configure connectivity settings and database drivers. Enabling the Data API for an Aurora cluster lets you use the RDS Query Editor with that cluster.
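To make that call pattern concrete, here is a minimal sketch of the request an application would send through the Data API. The ARNs are placeholders, and the helper function is ours (not part of any AWS SDK); with boto3 installed you would pass the resulting dict to `boto3.client('rds-data').execute_statement(**request)`.

```python
def build_data_api_request(cluster_arn, secret_arn, database, sql, parameters=None):
    """Assemble the keyword arguments for an RDS Data API execute_statement call.

    With boto3 available, you would invoke it as:
        boto3.client('rds-data').execute_statement(**request)
    """
    request = {
        "resourceArn": cluster_arn,   # identifies the Aurora cluster
        "secretArn": secret_arn,      # Secrets Manager secret holding the credentials
        "database": database,
        "sql": sql,
    }
    if parameters:
        request["parameters"] = parameters
    return request

# Example: the kind of query you'll run later in this tutorial.
request = build_data_api_request(
    "arn:aws:rds:replace_with_real_cluster_arn",
    "arn:aws:secretsmanager:replace_with_real_secret_arn",
    "tutorial",
    "SELECT * FROM sample_table",
)
```

Because the Data API is an HTTPS endpoint, this same request shape works from Lambda, a script on your laptop, or anywhere else with IAM credentials, with no database drivers or VPC networking required.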
Monitoring
1.12 — Make sure "Turn on Performance Insights" is unchecked.
This is a valuable monitoring feature that you'll likely use in your own deployments. However, it adds a little continuous activity to the cluster that can prevent Aurora Serverless v2 from scaling down to very low capacity. Thus, for this exercise we'll turn it off.
Review and create
After a quick review of all the fields in the form, you can proceed.
1.13 — Click on "Create database".
Retrieve the Cluster ARN
1.14 — Click on the Aurora cluster name.
1.15 — In the "Configuration" tab, copy the Cluster ARN and keep it handy. You will need it later.
Connect to the database
Before you begin these steps, wait until both the Aurora cluster and the associated DB instance have the status Available in the RDS console. That's when you can connect and run SQL operations.
1.16 — Open the left panel and click on "Query Editor".
1.17 — Enter connection details for Query Editor
For Database instance or cluster, select "myclustername".
For Database username, choose "Enter new database credentials".
Enter "postgres" as the database username, and input the database password you created earlier.
Then enter "postgres" for the database name.
Finally, click on "Connect to database".
1.18 — You can now click "Run" and execute the sample query.
1.19 — Create a database by running the following query:
CREATE DATABASE tutorial;
1.20 — Click on "Change database".
1.21 — Change the database to the one you just created.
Now under "Database username", you can select the postgres user to log in with those saved credentials. This time, for the name of the database, enter "tutorial".
1.22 — Create a table with this query:
CREATE TABLE sample_table(received_at TIMESTAMP, message VARCHAR(255));
When you connect to the database with the Query Editor, a Secret is created that you will use later in your Lambda function. Leave this tab open, as you will need to run some queries at the end of the tutorial.
Copy the secret ARN
Open a new tab and head to the AWS Secrets Manager. Then follow the steps below to retrieve the Secret ARN.
1.23 — Find the secret containing the "RDS database postgres credentials for name_of_your_cluster".
1.24 — Click on the Secret name, then copy the Secret ARN and keep it handy.
You'll use this Amazon Resource Name to connect this set of credentials with the associated Aurora cluster. That way, you don't have to type, copy, or write down the user name and password when you're accessing the Aurora cluster through the Query Editor or a Lambda function.
Step 2: Configure permissions
Open a new tab and head to the Roles page in the AWS IAM console. Then follow the steps below to create the IAM role for the Lambda function and attach required IAM policies.
2.1 — Choose "Create role".
2.2 — Under Trusted entity type, choose "AWS service".
2.3 — Under "Use case", choose Lambda and click on Next.
2.4 — Select the AWSLambdaBasicExecutionRole and AmazonRDSDataFullAccess managed policies, then click on Next.
2.5 — Enter the Role name AWSLambdaFunctionAuroraTutorial and then click on "Create role".
2.6 — Now the Roles screen will show a banner explaining that the role was successfully created.
Step 3 : Create your AWS Lambda function
Head to the AWS Lambda functions page in the AWS management console. Then follow the steps below to create the Lambda function.
3.1 — Choose Create function
3.2 — Select Author from scratch.
3.3 — In the Basic information pane, for Function name enter aurora-serverless-v2-test.
3.4 — For Runtime, choose Python 3.12
3.5 — Leave architecture set to x86_64
3.6 — Under Permissions, expand Change default execution role
Choose Use an existing role and from the dropdown select the IAM role AWSLambdaFunctionAuroraTutorial that you created earlier. Then click on Create function.
3.7 — Delete the code in the lambda_function.py file and in its place copy this sample code.
Replace the cluster_arn and secret_arn values with the Cluster ARN and Secret ARN values you obtained in previous steps.
###############################################################################
# Copyright 2024 Amazon.com, Inc. or its affiliates. All Rights Reserved.
#
# Permission is hereby granted, free of charge, to any person obtaining a copy of this
# software and associated documentation files (the "Software"), to deal in the Software
# without restriction, including without limitation the rights to use, copy, modify,
# merge, publish, distribute, sublicense, and/or sell copies of the Software, and to
# permit persons to whom the Software is furnished to do so.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED,
# INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A
# PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT
# HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION
# OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE
# SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
#
#
# For more information, see:
# https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/data-api.html
#
# Prerequisites:
# - Required permissions to Secrets Manager and RDS Data API
# - An Aurora DB cluster to do the database processing and store the results
#
###############################################################################
import json
import os
import os.path
import sys
# Required imports
import botocore
import boto3
# Imports for your app
from datetime import datetime
# ---------------------------------------------------------------------------
# Update these variables with your cluster and secret ARNs
cluster_arn = 'arn:aws:rds:replace_with_real_cluster_arn'
secret_arn = 'arn:aws:secretsmanager:replace_with_real_secrets_manager_arn'
# ---------------------------------------------------------------------------
def lambda_handler(event, context):
    if 'Records' not in event or 'Sns' not in event['Records'][0]:
        print('Not an SNS event!')
        print(str(event))
        return

    for record in event['Records']:
        call_rds_data_api(record['Sns']['Timestamp'], record['Sns']['Message'])

def call_rds_data_api(timestamp, message):
    rds_data = boto3.client('rds-data')

    sql = """
        INSERT INTO sample_table(received_at, message)
        VALUES(TO_TIMESTAMP(:time, 'YYYY-MM-DD"T"HH24:MI:SS.MSZ'), :message)
    """

    param1 = {'name': 'time', 'value': {'stringValue': timestamp}}
    param2 = {'name': 'message', 'value': {'stringValue': message}}
    param_set = [param1, param2]

    response = rds_data.execute_statement(
        resourceArn=cluster_arn,
        secretArn=secret_arn,
        database='tutorial',
        sql=sql,
        parameters=param_set)
    print(str(response))
3.8 — Deploy your Lambda Function by clicking on Deploy.
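You can also exercise the function with a hand-made test event in the Lambda console before wiring up SNS. The handler expects the SNS envelope shape sketched below; the Timestamp and Message values here are illustrative placeholders, and the extraction helper simply mirrors the fields the handler reads:

```python
# A minimal SNS-style event, matching the shape lambda_handler checks for.
# The Timestamp and Message values are illustrative placeholders.
sample_event = {
    "Records": [
        {
            "Sns": {
                "Timestamp": "2024-10-17T12:00:00.000Z",
                "Message": "Hello from SNS",
            }
        }
    ]
}

def extract_sns_messages(event):
    # Mirror the guard in lambda_handler: ignore events that aren't from SNS.
    if "Records" not in event or "Sns" not in event["Records"][0]:
        return []
    return [(r["Sns"]["Timestamp"], r["Sns"]["Message"]) for r in event["Records"]]

print(extract_sns_messages(sample_event))
```

Pasting an event like `sample_event` into the Lambda console's Test tab lets you confirm the database insert works before any SNS topic exists.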
Step 4: Create an Amazon SNS topic
Your Lambda Function will process messages from Amazon Simple Notification Service (SNS), which offers pub/sub messaging for microservices and serverless applications.
In a new tab, visit the SNS Dashboard and follow these instructions:
4.1 — In the "Create topic" panel, enter aurora-lambda-sns-test in the "Topic name" field. Then click on "Next step".
4.2 — Leave all the default values and click on "Create topic".
You will see a green banner indicating that the topic was successfully created.
4.3 — Copy the SNS ARN and keep it handy.
Keep this tab open, as you will use it to publish a message once the Lambda Function is configured to read from the topic you created.
Step 5: Subscribe AWS Lambda Function to Amazon SNS topic
Go to the AWS Lambda Management Console and follow these instructions:
5.1 — Click on the name of the Lambda Function you created in Step 3.
Click on "Add trigger".
5.2 — Type "SNS" and select the "SNS" services from the dropdown menu.
5.3 — In the "SNS Topic" field, enter the SNS ARN.
Then click on "Add".
Step 6: Publish test message
Go back to the SNS Dashboard and follow these instructions:
6.1 — Click on the name of the topic you created. Then click on "Publish message".
6.2 — Enter any value for the "Subject" field.
6.3 — Enter any value for the "Body" field.
6.4 — Scroll down and click on "Publish message".
Once the message is published, your Lambda Function will consume it and process it. In the next section, you will verify how the data was written to your Aurora database.
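If you later want to publish programmatically instead of through the console, the call boils down to three fields. This is only a sketch of the keyword arguments you would pass to boto3's `sns_client.publish`; the topic ARN is a placeholder and the helper function is ours:

```python
def build_publish_request(topic_arn, subject, body):
    # Kwargs for boto3: boto3.client("sns").publish(**request).
    # Subject is optional in SNS, but this tutorial sets one.
    return {"TopicArn": topic_arn, "Subject": subject, "Message": body}

# Placeholder ARN; use the SNS ARN you copied in step 4.3.
request = build_publish_request(
    "arn:aws:sns:replace_with_real_topic_arn",
    "Test subject",
    "Test body",
)
```

The `Message` value here is what arrives in the Lambda event as `record['Sns']['Message']`, and SNS stamps the delivery time into `record['Sns']['Timestamp']` for you.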
Verify database changes
6.5 — Go to the tab where you left the Query Editor open, connected to the tutorial database.
If you closed that tab, visit the RDS Dashboard and connect to the "tutorial" database in the Query Editor, as explained earlier.
6.6 — Select all the records from sample_table:
Run the following SQL query in the Query Editor window.
SELECT * FROM sample_table;
6.7 — Click on "Run" and scroll down to see the results.
All should be working now. You can experiment by changing the messages you send via SNS, or you can alter the lambda_handler function any way you want.
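If you run that same SELECT through the Data API from code rather than the Query Editor, the rows come back as lists of typed field dicts. The small helper below (a sketch, assuming the documented `records` shape with one typed value per field dict) flattens them into plain tuples:

```python
def flatten_records(response):
    """Flatten the Data API 'records' structure into tuples of plain values.

    Each row is a list of field dicts such as {"stringValue": "..."};
    we take the single value out of each dict.
    """
    rows = []
    for record in response.get("records", []):
        rows.append(tuple(next(iter(field.values())) for field in record))
    return rows

# Example response shaped like execute_statement's output for
# SELECT * FROM sample_table (values are illustrative).
response = {
    "records": [
        [{"stringValue": "2024-10-17 12:00:00"}, {"stringValue": "Hello from SNS"}]
    ]
}
print(flatten_records(response))
```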
Step 7: Cleanup
To finish this tutorial, delete your Aurora DB cluster now that it's no longer needed, along with the Lambda Function, the SNS topic, the Secret used to connect to the database, and any other leftover resources.
7.1 — Go to the AWS Lambda Management Console and select your Lambda Function.
Click on "Actions > Delete".
7.2 — Visit the SNS Dashboard and click on "Topics" on the left panel.
7.3 — Select the topic you created in Step 4 and click on "Delete".
7.4 — You will be asked for confirmation. Type "delete me" to confirm and click on "Delete".
Delete your Aurora Serverless cluster
7.5 — Go to the Databases page in the Amazon RDS console.
From the Databases list, select the Aurora Serverless v2 DB instance that's nested underneath your cluster.
If you called the cluster myclustername, the database instance name should be similar to myclustername-instance-1.
From the Actions menu, select Delete.
7.6 — When prompted, confirm deletion by typing delete me into the appropriate field.
If you set up a multi-AZ cluster or added any reader instances so that the cluster contains more than one DB instance, repeat this step for each DB instance in the cluster.
7.7 — Once all DB instances in the cluster are in the Deleting state, select the myclustername cluster, and from the Actions menu select Delete.
You don't need to wait until the DB instances are actually gone. The cluster item on the console page represents the storage for your database tables and other objects. The data in the cluster remains intact until you delete the cluster, regardless of how many instances are in the cluster.
7.8 — Confirm that you really intend to delete the Aurora cluster.
Uncheck the box for Create final snapshot.
Select the acknowledgment checkbox to confirm it's OK to delete the data in the Aurora cluster.
If you created any information during your testing that you want to preserve, you can leave Create final snapshot selected. This will allow you to retrieve the data later in a new Aurora cluster.
Then, enter "delete me" to confirm deletion.
Delete your secret
7.9 — Go to the AWS Secrets Manager, find the secret containing the "RDS database admin credentials for the name of your cluster", and click on the name of the secret.
7.10 — Click on "Actions > Delete secret".
7.11 — The secrets can't be deleted immediately. The minimum waiting period for a scheduled deletion is 7 days. Select 7 days and click on "Schedule deletion".
Delete your IAM role
7.12 — Go to the AWS IAM console. Click "Roles" and search for the IAM role you created, AWSLambdaFunctionAuroraTutorial.
Select the role and click on "Delete role".
7.13 — To confirm deletion, enter the role name in the text input field.
Conclusion
You have created an Aurora Serverless database and connected it with an AWS Lambda Function via Aurora's Data API. You configured Amazon Simple Notification Service (SNS) as a trigger for your Lambda Function, and the messages you sent via SNS were processed and stored in your Aurora Serverless database.
Recommended next steps
Learn more about Amazon Aurora features
Find out more about the features of Amazon Aurora with the Amazon Aurora User Guide.
Best practices with Amazon Aurora
Learn about general best practices and options for using or migrating data to an Amazon Aurora DB cluster.
Learn more about Serverless
If you want to know more about serverless applications, check the AWS Lambda Documentation, as well as the User Guide for Aurora.