AWS Database Blog

Refactoring to Amazon DynamoDB

July 2023: This post was reviewed for accuracy.

Do you want to move from a relational database to NoSQL? In the following post, I walk you through reading, transforming, and writing SQL Server data from an Amazon EC2 instance to Amazon DynamoDB. I use AWS Glue to transform the source data model of multiple tables into two target tables in DynamoDB.

There are other options for data model transformation, such as AWS DMS or an AWS Marketplace tool, in place of AWS Glue. For this one-time migration, I chose AWS Glue and Scala code to transform several tables into one.

Overview

This demo shows you how to refactor a commercial relational database that generates and sells sporting event tickets. I explain best practices for refactoring the database to DynamoDB, and show how to configure DynamoDB tables and transfer data using AWS Glue. I also show how to set up VPC endpoints and a secure IAM role for AWS Glue, crawl the source database, and run an Apache Spark ETL job.

To get started right away, you can access the code in the Refactoring to DynamoDB repository.

The problem and proposed solution

When you spin up the AWS CloudFormation stack in your environment, the stack creates an EC2 instance with a resident ticketing database.

The database, DMS_SAMPLE, contains a set of configuration data for sporting venues, leagues, teams, and players. The data in these employee-managed tables doesn’t change often; a dedicated configuration application writes to these tables. The database also contains the ticket sales tables, SPORTING_EVENT_TICKET and TICKET_PURCHASE_HIST, which support cataloging, selling, and transferring sporting event tickets.

Because employees and consumers continually buy and sell new event tickets, this data changes frequently. Peak traffic volumes flow to the site after ticket sales launch. The database must scale to meet this fluctuating demand. The following diagram shows the database schema.

The event ticketing system examined in this demo includes two distinct use cases: ticket sales and retrievals. Each use case demonstrates distinct access patterns.

Ticket sales activity fluctuates sharply. At steady state, the system predictably handles a few hundred requests per hour, but during peak events, requests can spike to hundreds of thousands per minute. The service also sells tickets to resellers, who then resell them to other individuals. These automated reseller systems can also cause infrequent, unpredictable activity spikes.

Users typically retrieve tickets based on the following criteria and access patterns:

  1. What tickets do I own?
    • Get all tickets I own
    • Get all tickets available for my venue and date
    • Purchase tickets
    • Transfer tickets to another user
  2. What tickets have I transferred to someone else?
    • Get tickets I transferred to another user
    • Get tickets transferred to me from another user

The application writing the configuration data performs well at baseline. However, peak ticket sales traffic for new events strains the relational database system. As users add more and more events, they have to increase database resources to handle the load.

On the other hand, the subsequent slowdown in ticket sales results in excess computing capacity. Because traffic is challenging to predict, users need a more flexible way to add and subtract capacity than the current system.

In general, a relational database system scales poorly for the following reasons:

  • The storage of data in multiple tables, requiring multiple queries to read/write to disk
  • The performance costs of an ACID-compliant relational transaction system at scale
  • The expensive joins required to reassemble views of query results

So, the problem appears to be: how can users scale more efficiently to meet fluctuating demand?

The proposed solution, DynamoDB, is a fully managed NoSQL database service with seamless scalability. DynamoDB provides fast scale-out for peaks and fast scale-in for valleys. It provides consistent performance and automatically scales with increased traffic to the application. DynamoDB automatic scaling dynamically adjusts provisioned throughput capacity in response to actual traffic patterns.
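
For example, if you use provisioned capacity, you can enable auto scaling on the migrated table with Application Auto Scaling. The following Scala sketch (using the AWS SDK for Java) shows one way to register the write capacity of the ticket_sales table and attach a target-tracking policy. The capacity limits and target utilization are illustrative assumptions, and this step is not part of the AWS CloudFormation template in this post.

import com.amazonaws.services.applicationautoscaling.AWSApplicationAutoScalingClientBuilder
import com.amazonaws.services.applicationautoscaling.model.{PredefinedMetricSpecification, PutScalingPolicyRequest, RegisterScalableTargetRequest, TargetTrackingScalingPolicyConfiguration}

object TicketTableAutoScaling {
  def main(args: Array[String]): Unit = {
    val autoScaling = AWSApplicationAutoScalingClientBuilder.defaultClient()

    // Register the write capacity of the ticket_sales table as a scalable target.
    // The minimum and maximum values here are illustrative assumptions.
    autoScaling.registerScalableTarget(new RegisterScalableTargetRequest()
      .withServiceNamespace("dynamodb")
      .withResourceId("table/ticket_sales")
      .withScalableDimension("dynamodb:table:WriteCapacityUnits")
      .withMinCapacity(5)
      .withMaxCapacity(40000))

    // Target tracking keeps consumed writes near 70% of provisioned capacity.
    autoScaling.putScalingPolicy(new PutScalingPolicyRequest()
      .withPolicyName("ticket-sales-write-scaling")
      .withServiceNamespace("dynamodb")
      .withResourceId("table/ticket_sales")
      .withScalableDimension("dynamodb:table:WriteCapacityUnits")
      .withPolicyType("TargetTrackingScaling")
      .withTargetTrackingScalingPolicyConfiguration(
        new TargetTrackingScalingPolicyConfiguration()
          .withTargetValue(70.0)
          .withPredefinedMetricSpecification(new PredefinedMetricSpecification()
            .withPredefinedMetricType("DynamoDBWriteCapacityUtilization"))))
  }
}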

DynamoDB scales well because of the following reasons:

  • Schema flexibility lets DynamoDB store complex hierarchical data within a single item.
  • Composite key design lets DynamoDB store related items close together on the same table.
  • Automatic sharding permits horizontal scaling, so many servers handle your data concurrently and capacity changes stay transparent to the application.

You can manage high-performance reads and writes with DynamoDB. After you design the right schema for your workload, you can expect consistent performance across widely varying loads.
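
To make the composite-key idea concrete, the first access pattern ("What tickets do I own?") maps to a single Query against a partition key. The following Scala sketch uses the AWS SDK for Java; the table and attribute names (ticket_sales, ticketholder_id) are assumptions for illustration rather than the exact schema from the repository.

import com.amazonaws.services.dynamodbv2.AmazonDynamoDBClientBuilder
import com.amazonaws.services.dynamodbv2.model.{AttributeValue, QueryRequest}

import scala.collection.JavaConverters._

object MyTickets {
  def main(args: Array[String]): Unit = {
    val dynamoDb = AmazonDynamoDBClientBuilder.defaultClient()

    // "What tickets do I own?" becomes one Query against the partition key.
    val request = new QueryRequest()
      .withTableName("ticket_sales")                           // assumed table name
      .withKeyConditionExpression("ticketholder_id = :owner")  // assumed key attribute
      .withExpressionAttributeValues(Map(":owner" -> new AttributeValue().withN("3477")).asJava)

    dynamoDb.query(request).getItems.asScala.foreach(println)
  }
}

Because all of an owner's tickets share one partition key value, the query reads a single item collection regardless of how large the table grows.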

Transferring data with DynamoDB

In this section, I show you how to:

  1. Configure DynamoDB table structure.
  2. Set up the files for your environment.
  3. Create resources using AWS CloudFormation.
  4. Generate some data in your source database.
  5. Extract, transform, and load data with AWS Glue.

Prerequisites

To follow along and run the code provided with this demo, you need the following:

  • An AWS account with permissions to create the resources used in this post
  • An Amazon EC2 key pair
  • An S3 bucket for the scripts and JAR files
  • The AWS CLI, installed and configured
  • Git and Apache Maven, to clone and build the emr-dynamodb-connector

To follow along in your environment, download the files from the repository, and run the steps in order. Depending on how much data you are moving, you may be able to use Free Tier resources.

In general, make sure to do the following before moving a relational database to DynamoDB:

  • Understand the use case or business functionality that you plan to support.
  • Understand the typical data access patterns.
  • Agree on the goal, problem, or aspect of the system that you are trying to optimize.

For additional information, see Best Practices for DynamoDB.

Configure DynamoDB table structure

With DynamoDB, you can combine the tickets that are generated with the tickets that are sold into a single table, using a status/owner attribute to track the current state of each ticket. The following screenshot demonstrates a combined ticket information table.
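
As a rough sketch of what one combined item might look like (attribute names and values here are assumptions, not the repository's exact schema), the following Scala snippet, using the AWS SDK for Java, writes a ticket item that carries fields formerly split across SPORTING_EVENT_TICKET and TICKET_PURCHASE_HIST and records the ticket's current state in a status/owner attribute:

import com.amazonaws.services.dynamodbv2.AmazonDynamoDBClientBuilder
import com.amazonaws.services.dynamodbv2.model.{AttributeValue, PutItemRequest}

import scala.collection.JavaConverters._

object PutCombinedTicket {
  def main(args: Array[String]): Unit = {
    val dynamoDb = AmazonDynamoDBClientBuilder.defaultClient()

    // One item holds ticket attributes (from SPORTING_EVENT_TICKET) and
    // purchase attributes (from TICKET_PURCHASE_HIST); status/owner tracks state.
    val item = Map(
      "ticket_id"             -> new AttributeValue().withN("1890112"),
      "event_id"              -> new AttributeValue().withN("23"),
      "ticket_price"          -> new AttributeValue().withN("52.00"),
      "status_owner"          -> new AttributeValue().withS("SOLD#3477"),
      "purchased_by_id"       -> new AttributeValue().withN("3477"),
      "transaction_date_time" -> new AttributeValue().withS("2023-07-01T19:05:00Z")
    ).asJava

    dynamoDb.putItem(new PutItemRequest().withTableName("ticket_sales").withItem(item))
  }
}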

For the configuration data, use an overloaded table structure—that is, a structure that combines multiple objects into a single table. This table structure provides data to track ticket generation.

The following screenshot shows how a few of the tables model this data. I use the partition key to group similar data together and the sort key for unique values within the configuration data. You can use the descriptive name field as a global secondary index (GSI) to find items by name, and use other fields that contain keywords as sparse GSI values for targeted queries.
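
For example, a targeted lookup by team name only needs to touch the name index. The following Scala sketch (AWS SDK for Java) queries such a sparse GSI; the table, index, and attribute names are assumptions for illustration:

import com.amazonaws.services.dynamodbv2.AmazonDynamoDBClientBuilder
import com.amazonaws.services.dynamodbv2.model.{AttributeValue, QueryRequest}

import scala.collection.JavaConverters._

object FindConfigItemByName {
  def main(args: Array[String]): Unit = {
    val dynamoDb = AmazonDynamoDBClientBuilder.defaultClient()

    // Only config items that carry the descriptive name attribute appear in the
    // sparse GSI, so a lookup by name never touches unrelated items.
    val request = new QueryRequest()
      .withTableName("config")       // assumed table name
      .withIndexName("name-index")   // assumed GSI name
      .withKeyConditionExpression("#n = :team")
      .withExpressionAttributeNames(Map("#n" -> "name").asJava)
      .withExpressionAttributeValues(Map(":team" -> new AttributeValue().withS("Seattle Mariners")).asJava)

    dynamoDb.query(request).getItems.asScala.foreach(println)
  }
}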

Set up the files for your environment

Next, set up your environment for refactoring.

  1. If you haven’t already, download the files from the GitHub repository to your local machine. Then, edit the parameters.json file, updating the following values with information from your AWS account environment.
    1. Replace the KeyName parameter with your EC2 key pair name. Remember that you use your EC2 key to log in to the EC2 instance that AWS CloudFormation launches to host your SQL Server database.
    2. Replace the IPAddressForRDP parameter (default of 0.0.0.0/0) with your IP address. You can use https://www.whatsmyip.org to retrieve your IP address.
    3. Replace the S3ScriptLocation parameter value of <bucket name> with your S3 bucket name.
    4. Replace the S3LocationForJarFiles parameter value of <bucket name> with your S3 bucket name.
  2. Upload the following files to an S3 bucket in your environment under the key <S3 bucketname>/scripts:
    RDBMStoDynamoDB.yml
    parameters.json
    config.scala 
    ticketsale.scala 
  3. To obtain the dependencies required to follow along, clone the repository awslabs/emr-dynamodb-connector using the following command:
    git clone https://github.com/awslabs/emr-dynamodb-connector.git
    1. After cloning completes, use the following command to build your repository:
      mvn clean install
    2. Using the same S3 bucket from the previous step, upload the following JAR files under the key <S3 bucketname>/jars:
      emr-dynamodb-connector/emr-dynamodb-hadoop/target/emr-dynamodb-hadoop-4.9.0-SNAPSHOT.jar
      emr-dynamodb-connector/emr-dynamodb-hive/target/emr-dynamodb-hive-4.9.0-SNAPSHOT-jar-with-dependencies.jar

You are now ready to create the resources using AWS CloudFormation.

Create resources using AWS CloudFormation

Create a stack from the AWS CloudFormation template using the following AWS CLI command. Replace the stack name and S3 locations with the appropriate information from your environment.

aws cloudformation create-stack --stack-name <stack name> \
    --template-url https://<your-bucket-name>.s3.amazonaws.com/scripts/RDBMStoDynamoDB.yml \
    --parameters https://<your-bucket-name>.s3.amazonaws.com/scripts/parameters.json \
    --capabilities CAPABILITY_NAMED_IAM

Generate some data in your source database

To generate data for your use case, you must log in to the EC2 instance and run a query. Here are the steps to obtain the IP address of your EC2 instance and connect to it:

  1. Obtain the Instance ID using the following command:
    aws cloudformation describe-stacks --stack-name <stack name> \
    --query "Stacks[0].Outputs[?OutputKey=='EC2InstanceID'].OutputValue" --output text
  2. Decrypt and retrieve the password data using the following command:
    aws ec2 get-password-data --instance-id <Instance ID> \
    --priv-launch-key <Location to pem file>
  3. Obtain the EC2 instance public IP address from the Output section using the following command:
    aws cloudformation describe-stacks --stack-name <stack name> \
    --query "Stacks[0].Outputs[?OutputKey=='PublicIPOfEC2InstanceWithSQLServer'].OutputValue" --output text

You can use Remote Desktop to sign in to the EC2 instance using its public IP address, with Administrator as the user name and the password data from the previous commands.

  1. Launch SQL Server Management Studio from the instance taskbar with the following credentials:
    awssct/Password1
  2. Run the following query:
    USE dms_sample
    BEGIN 
        -- to generate lots of data, change the while condition to a value for
        -- the number of events you want to generate tickets for
        DECLARE @generatetkccnt INT = 0;
        WHILE @generatetkccnt < 10    
        BEGIN
            EXEC dbo.generate_tickets @event_id = @generatetkccnt 
            SET @generatetkccnt = @generatetkccnt + 1;
        END;
        -- this loop will sell approximately 10 sets of tickets, 
        -- to increase the number of tickets sold increase the counter 
        DECLARE @ticketcnt INT = 0;
        WHILE @ticketcnt < 10
        BEGIN 
            EXEC generateTicketActivity @max_transactions = @ticketcnt
            SET @ticketcnt = @ticketcnt + 1;  
        END;
        -- this loop will transfer approximately 5 sets of tickets, 
        -- to increase the number of tickets transferred increase the counter  
        DECLARE @tickettransfercnt INT = 0;
        WHILE @tickettransfercnt < 10 
        BEGIN       
            EXEC generateTransferActivity @max_transactions = @tickettransfercnt
            SET @tickettransfercnt = @tickettransfercnt + 1
        END;
    END;

This query should generate ticketing data and allow you to proceed with the next steps.

  3. Confirm ticket activity using the following queries:
    -- query to confirm sales     
    SELECT * FROM dms_sample.dbo.ticket_purchase_hist;
    -- query to confirm transfers    
    SELECT * FROM dms_sample.dbo.ticket_purchase_hist where transferred_from_id IS NOT NULL;

Extract, transform, and load data with AWS Glue

AWS Glue is a fully managed ETL service. You can create and run an ETL job with a few steps in the AWS Management Console or with code using the AWS CLI or AWS SDK.

In this section, I show you how to use AWS Glue to connect to your DynamoDB target and transfer data privately through S3:

  1. Launch an AWS Glue crawler to crawl the dms_sample database and get metadata. The process populates the AWS Glue Catalog with information about your source database and source tables.
    aws glue start-crawler --name GlueCrawlerForSqlServer

    The crawler uses a JDBC connection to connect to SQL Server and, in a few minutes, populates the AWS Glue database with metadata tables.

  2. Use the following get-tables command to get details about your tables:
    aws glue get-tables --database-name gluedatabase
  3. Run the AWS Glue job GlueTicketSaleJob for ticket sales. The job reads from the SQL Server source database and extracts data from SPORTING_EVENT_TICKET and TICKET_PURCHASE_HIST. The job then uses the Scala code in ticketsale.scala to transform the two tables into the single DynamoDB table, ticket_sales, as outlined at the beginning of this post (a simplified sketch of this transformation follows this list):
    aws glue start-job-run --job-name GlueTicketSaleJob

    You should receive a JobRunId value as output from this command:

    {
     "JobRunId": "jr_f68bfd90a1fd16d24a7e2143aca20cb9c0b0472185d241b5cea37918f8de4c15"
    }
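
The complete transformation lives in ticketsale.scala in the repository. To show the overall shape of such a job (this is a simplified sketch, not the repository code), the following AWS Glue Scala script reads the two crawled tables from the AWS Glue Data Catalog, joins them, and writes items to the ticket_sales table through the emr-dynamodb-connector JAR files you uploaded earlier. The catalog table names, join columns, attribute mapping, and Region are assumptions; confirm your table names with the get-tables command above and make sure the job references the uploaded JAR files.

import java.util.HashMap

import com.amazonaws.services.dynamodbv2.model.AttributeValue
import com.amazonaws.services.glue.GlueContext
import com.amazonaws.services.glue.util.{GlueArgParser, Job}
import org.apache.hadoop.dynamodb.DynamoDBItemWritable
import org.apache.hadoop.io.Text
import org.apache.hadoop.mapred.JobConf
import org.apache.spark.SparkContext

import scala.collection.JavaConverters._

object GlueApp {
  def main(sysArgs: Array[String]): Unit = {
    val sc = new SparkContext()
    val glueContext = new GlueContext(sc)
    val args = GlueArgParser.getResolvedOptions(sysArgs, Seq("JOB_NAME").toArray)
    Job.init(args("JOB_NAME"), glueContext, args.asJava)

    // Read both source tables from the AWS Glue Data Catalog. The table names
    // below are assumptions; confirm yours with `aws glue get-tables`.
    val tickets = glueContext.getCatalogSource(database = "gluedatabase",
      tableName = "crawler_ssdms_sample_dbo_sporting_event_ticket").getDynamicFrame().toDF()
    val purchases = glueContext.getCatalogSource(database = "gluedatabase",
      tableName = "crawler_ssdms_sample_dbo_ticket_purchase_hist").getDynamicFrame().toDF()

    // Denormalize: one row per sold ticket, carrying ticket and purchase attributes.
    val sold = tickets
      .join(purchases, tickets("id") === purchases("sporting_event_ticket_id"))
      .select(tickets("id").as("ticket_id"),
              purchases("purchased_by_id"),
              tickets("ticket_price"))

    // Write through the emr-dynamodb-connector (the JAR files uploaded earlier).
    val jobConf = new JobConf(sc.hadoopConfiguration)
    jobConf.set("dynamodb.output.tableName", "ticket_sales")
    jobConf.set("dynamodb.servicename", "dynamodb")
    jobConf.set("dynamodb.regionid", "us-east-1")   // replace with your Region
    jobConf.set("mapred.output.format.class",
      "org.apache.hadoop.dynamodb.write.DynamoDBOutputFormat")

    val items = sold.rdd.map { row =>
      val attrs = new HashMap[String, AttributeValue]()
      attrs.put("ticket_id", new AttributeValue().withN(row.getAs[Any]("ticket_id").toString))
      attrs.put("purchased_by_id", new AttributeValue().withN(row.getAs[Any]("purchased_by_id").toString))
      attrs.put("ticket_price", new AttributeValue().withN(row.getAs[Any]("ticket_price").toString))
      val writable = new DynamoDBItemWritable()
      writable.setItem(attrs)
      (new Text(""), writable)
    }
    items.saveAsHadoopDataset(jobConf)

    Job.commit()
  }
}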

You can also start the second AWS Glue job, GlueConfigJob, for the supporting configuration data in parallel. The job reads from the SQL Server source database and extracts data from all the other configuration tables in the SQL Server database. The job uses the Scala code config.scala to transform the config tables into the overloaded single DynamoDB table, config:

aws glue start-job-run --job-name GlueConfigJob

You receive a new JobRunId as output from this command:

{
 "JobRunId": "jr_f30cd7703a9554e8215975c7f3e8b44a3f84f6c4c724c094bd4f4ca0a40f8b2e"
}

You can use the AWS CLI to monitor the status of your job:

aws glue get-job-runs --job-name GlueTicketSaleJob

The output lists the job runs for this job. It may take some time for the jobs to complete, but when they finish, the JobRunState changes to SUCCEEDED:

{
 "JobRuns": [
 {
 "Id": "jr_f68bfd90a1fd16d24a7e2143aca20cb9c0b0472185d241b5cea37918f8de4c15",
 "Attempt": 0,
 "JobName": "GlueTicketSaleJob",
 "StartedOn": 1558363337.602,
 "LastModifiedOn": 1558363341.778,
 "JobRunState": "RUNNING",
 "PredecessorRuns": [],
 "AllocatedCapacity": 10,
 "ExecutionTime": 0,
 "Timeout": 2880,
 "MaxCapacity": 10.0,
 "LogGroupName": "/aws-glue/jobs"
 }
 ]
}
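
If you prefer to check both jobs from code rather than the AWS CLI, the following Scala sketch uses the AWS SDK for Java to list recent runs and their states for each job. It's a convenience illustration, not part of the repository:

import com.amazonaws.services.glue.AWSGlueClientBuilder
import com.amazonaws.services.glue.model.GetJobRunsRequest

import scala.collection.JavaConverters._

object CheckGlueJobStatus {
  def main(args: Array[String]): Unit = {
    val glue = AWSGlueClientBuilder.defaultClient()

    for (jobName <- Seq("GlueTicketSaleJob", "GlueConfigJob")) {
      // List recent runs and their states (RUNNING, SUCCEEDED, FAILED, and so on).
      val runs = glue.getJobRuns(new GetJobRunsRequest().withJobName(jobName)).getJobRuns.asScala
      runs.foreach(run => println(s"$jobName ${run.getId}: ${run.getJobRunState}"))
    }
  }
}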

After the job completes, you can review your results in DynamoDB. The following screenshot represents sample config data.

The following screenshot shows sample ticket sales data.
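
As a quick sanity check beyond the console, you can count the migrated items with a short Scala sketch (AWS SDK for Java). This snippet is an illustration, not part of the repository:

import com.amazonaws.services.dynamodbv2.AmazonDynamoDBClientBuilder
import com.amazonaws.services.dynamodbv2.model.{ScanRequest, Select}

object CountMigratedItems {
  def main(args: Array[String]): Unit = {
    val dynamoDb = AmazonDynamoDBClientBuilder.defaultClient()

    for (table <- Seq("ticket_sales", "config")) {
      // Select.COUNT returns the item count for the scanned page (up to 1 MB of
      // data); paginate with LastEvaluatedKey if your tables are larger.
      val result = dynamoDb.scan(new ScanRequest().withTableName(table).withSelect(Select.COUNT))
      println(s"$table: ${result.getCount} items")
    }
  }
}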

AWS CloudFormation overview

Review the items that you created for your AWS Glue job and consider how all your components fit together in AWS CloudFormation.

The following diagram shows your EC2 instance inside your public subnet, protected by a security group. The security group allows inbound access from the CIDR range specified in the AWS CloudFormation template and includes a self-referencing rule for AWS Glue. You can access DynamoDB and S3 through VPC endpoints. Data flows from the SQL Server database on your EC2 instance through AWS Glue and S3 to your DynamoDB tables.

VPC endpoints for Amazon S3 and DynamoDB

The next portion of the AWS CloudFormation template provides a VPC endpoint to connect your VPC to Amazon S3 privately. The VPC endpoint for S3 allows AWS Glue to use private IP addresses to access S3 with no exposure to the public internet. Any requests to S3 within this Region route to a private S3 endpoint within the AWS network:

  S3Endpoint:
    Type: AWS::EC2::VPCEndpoint
    Properties:
      PolicyDocument:
        Version: 2012-10-17
        Statement:
          - Effect: Allow
            Principal: '*'
            Action:
              - '*'
            Resource:
              - '*'
      VpcId: !Ref Vpc
      ServiceName: !Join 
        - ''
        - - com.amazonaws.
          - !Ref 'AWS::Region'
          - .s3
      RouteTableIds: 
        - !Ref PublicRouteTable

The following portion of the AWS CloudFormation template creates a VPC endpoint for DynamoDB. It’s required for AWS Glue to write to DynamoDB.

DynamoDBEndPoint:
    Type: AWS::EC2::VPCEndpoint
    Properties: 
      PolicyDocument:
        Version: 2012-10-17
        Statement:
          - Effect: Allow
            Principal: '*'
            Action:
              - '*'
            Resource:
              - '*'
      VpcId: !Ref Vpc
      ServiceName: !Join
        - ''
        - - com.amazonaws.
          - !Ref 'AWS::Region'
          - .dynamodb
      RouteTableIds: 
        - !Ref PublicRouteTable

Setting an ingress rule

For AWS Glue components to communicate with each other, a self-referencing ingress rule is required. It restricts the source to the same security group in the VPC and prevents traffic from other networks. To set the ingress rule, run the following code:

SelfReferencingIngress:
    Type: AWS::EC2::SecurityGroupIngress
    Properties: 
      GroupId: !Ref InstanceSecurityGroup
      Description: Allow access between security groups.
      IpProtocol: tcp
      FromPort: '0'
      ToPort: '65535'
      SourceSecurityGroupId: !Ref InstanceSecurityGroup

Provisioning an IAM role

The AWS Glue job writes logs to Amazon CloudWatch and saves temporary files to an S3 bucket. AWS Glue also must access the supporting JAR files in your S3 bucket, and the AWS Glue crawler needs permissions to reach your EC2 instance so that it can crawl the source data in SQL Server.

The role also has AWS Glue full access, allowing AWS Glue components to communicate with each other (for example, the AWS Glue crawler needs access to the AWS Glue connection). To provide IAM role access, run the following code:

GlueRole:
    Type: AWS::IAM::Role
    Properties: 
      AssumeRolePolicyDocument:
        Version: "2012-10-17"
        Statement: 
          - 
            Effect: "Allow"
            Principal: 
              Service: 
                - "glue.amazonaws.com"
            Action: 
              - "sts:AssumeRole"
      ManagedPolicyArns:
        - 'arn:aws:iam::aws:policy/AmazonEC2FullAccess'
        - 'arn:aws:iam::aws:policy/AmazonS3FullAccess'
        - 'arn:aws:iam::aws:policy/CloudWatchLogsFullAccess'  
        - 'arn:aws:iam::aws:policy/AmazonDynamoDBFullAccess'
        - 'arn:aws:iam::aws:policy/AWSGlueConsoleFullAccess'
      RoleName: GlueConnectionRole

Connecting AWS Glue to SQL Server

Your AWS Glue connection creates a new JDBC connection to connect to the SQL Server database on EC2. For more details, see AWS Glue connections.

GlueConnectionToSQLServer:
    Type: AWS::Glue::Connection
    Properties:
      CatalogId: !Ref AWS::AccountId
      ConnectionInput:
        Description: "Connect to SQL server database."
        Name: GlueConnectionToSQLServer
        ConnectionType: "JDBC"
        PhysicalConnectionRequirements:
          AvailabilityZone: 
            Fn::Select:
              - 0
              - Fn::GetAZs: ""
          SecurityGroupIdList:
            - !Ref InstanceSecurityGroup
          SubnetId: !Ref PublicSubnet
        ConnectionProperties: 
          "USERNAME": awssct
          "PASSWORD": Password1
          "JDBC_ENFORCE_SSL" : "false"
          "JDBC_CONNECTION_URL": !Sub
            - jdbc:sqlserver://${IpAddress}:1433;databaseName=dms_sample 
            - { IpAddress: !GetAtt EC2SourceDB.PrivateDnsName }

The next portion of the AWS CloudFormation template creates an AWS Glue database where the crawler saves the crawled metadata:

GlueDatabase:
    Type: AWS::Glue::Database
    Properties:
      DatabaseInput: 
        Name: gluedatabase
      CatalogId: !Ref AWS::AccountId

Crawling the source database

The AWS CloudFormation template creates an AWS Glue crawler to crawl the dms_sample database. The crawled tables are prefixed with Crawler_ss and saved in the AWS Glue database created in the previous section:

GlueCrawler:
    Type: AWS::Glue::Crawler
    Properties:
      Role: !GetAtt GlueRole.Arn
      Description: "Used to crawl the RDS for metadata"
      DatabaseName: !Ref GlueDatabase
      Targets: 
        JdbcTargets:
          - ConnectionName: !Ref GlueConnectionToSQLServer
            Path: "dms_sample/dbo/%"
      TablePrefix: "Crawler_ss" 
      Name: GlueCrawlerForSqlServer

Running an Apache Spark ETL job

The AWS Glue job takes the script from the given S3 location and runs it on demand. Use the command name glueetl for your Apache Spark ETL job:

GlueJob: 
    Type: AWS::Glue::Job
    Properties:
      Command: 
        Name: glueetl
        ScriptLocation: !Ref S3ScriptLocation
      ExecutionProperty:
        MaxConcurrentRuns: 2
      MaxRetries: 0
      Name: GlueJob
      Role: !GetAtt GlueRole.Arn

Conclusion

You’ve now seen how to move a ticketing database from your SQL Server RDBMS to DynamoDB using AWS Glue. To do the same for your application, see Best Practices for DynamoDB. Download code examples from the Refactoring-to-DynamoDB GitHub repo and modify them for your application and environment.

As always, AWS welcomes feedback. Please submit comments or questions below.

 


About the Authors

Wendy Neu has worked as a Data Architect with Amazon since January 2015. Prior to joining Amazon, she worked as a consultant in Cincinnati, OH helping customers integrate and manage their data from different unrelated data sources.

Soumya Vanga is an Associate Cloud Developer with AWS Professional Services in New York, NY, helping customers design solutions and workloads and adopt Cloud Native services.