This Guidance demonstrates an automatically configured data lake on AWS built on an event-driven, serverless, and scalable architecture. It uses AWS managed services to ingest, store, process, and analyze data, offering a secure, flexible, and cost-effective design with appropriate data governance. This approach provides greater agility, flexibility, and reliability than traditional data management systems. The entire solution is delivered as a codified application using infrastructure as code (IaC) and a continuous integration and continuous delivery (CI/CD) pipeline.
Please note: [Disclaimer]
Architecture Diagram
[Architecture diagram description]
Step 1
The data administrator uploads JSON files to the Amazon Simple Storage Service (Amazon S3) raw bucket. Object creation in Amazon S3 emits an event to Amazon EventBridge.
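As a minimal sketch of this step using the AWS SDK for Python (Boto3), the example below uploads a JSON file to a raw bucket and enables EventBridge notifications on that bucket so that object-creation events reach EventBridge. The bucket, file, and key names are placeholders, not the names created by this Guidance.

```python
import boto3

s3 = boto3.client("s3")

# Hypothetical bucket name; the deployed Guidance provisions its own buckets.
RAW_BUCKET = "datalake-raw-bucket"

# Turn on EventBridge notifications for the bucket (a one-time configuration),
# so every object-level event is delivered to EventBridge.
s3.put_bucket_notification_configuration(
    Bucket=RAW_BUCKET,
    NotificationConfiguration={"EventBridgeConfiguration": {}},
)

# Upload a JSON file; the resulting "Object Created" event is sent to EventBridge.
s3.upload_file("orders.json", RAW_BUCKET, "incoming/orders.json")
```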
Step 2
An EventBridge rule sends a message to an Amazon Simple Queue Service (Amazon SQS) queue, which invokes an AWS Lambda function.
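The sketch below shows how such a rule and target could be defined with Boto3, assuming hypothetical resource names and ARNs. It matches "Object Created" events from the raw bucket and routes them to the SQS queue that triggers the Lambda function.

```python
import json
import boto3

events = boto3.client("events")

# Hypothetical names/ARNs; the deployed Guidance creates its own resources.
RULE_NAME = "raw-bucket-object-created"
RAW_BUCKET = "datalake-raw-bucket"
QUEUE_ARN = "arn:aws:sqs:us-east-1:111122223333:raw-events-queue"

# Rule that matches object-creation events from the raw bucket.
events.put_rule(
    Name=RULE_NAME,
    EventPattern=json.dumps({
        "source": ["aws.s3"],
        "detail-type": ["Object Created"],
        "detail": {"bucket": {"name": [RAW_BUCKET]}},
    }),
    State="ENABLED",
)

# Route matched events to the SQS queue.
events.put_targets(
    Rule=RULE_NAME,
    Targets=[{"Id": "raw-events-queue", "Arn": QUEUE_ARN}],
)
```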
Step 3
The Lambda function starts an AWS Step Functions workflow, in which another Lambda function reads the files from the S3 raw bucket, transforms them, and writes the resulting JSON files to the S3 stage bucket.
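A minimal sketch of the triggering Lambda handler is shown below. It assumes the SQS event carries the forwarded EventBridge event in the message body and that the state machine ARN is a placeholder supplied by the IaC.

```python
import json
import boto3

sfn = boto3.client("stepfunctions")

# Hypothetical ARN; in practice this would come from configuration or IaC outputs.
STATE_MACHINE_ARN = "arn:aws:states:us-east-1:111122223333:stateMachine:raw-to-stage"

def handler(event, context):
    """Invoked by SQS; starts the raw-to-stage Step Functions workflow."""
    for record in event["Records"]:
        detail = json.loads(record["body"])  # the EventBridge event forwarded via SQS
        sfn.start_execution(
            stateMachineArn=STATE_MACHINE_ARN,
            input=json.dumps(detail),
        )
```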
Step 4
A Lambda function updates the Amazon DynamoDB table with the Step Functions job status.
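The status update could look like the following Boto3 sketch. The table name and key schema are hypothetical; the Guidance defines its own.

```python
from datetime import datetime, timezone

import boto3

dynamodb = boto3.resource("dynamodb")

# Hypothetical table and attribute names.
table = dynamodb.Table("datalake-job-status")

def record_status(execution_arn: str, status: str) -> None:
    """Upsert the Step Functions job status for a given execution."""
    table.update_item(
        Key={"execution_arn": execution_arn},
        UpdateExpression="SET #s = :s, updated_at = :t",
        ExpressionAttributeNames={"#s": "status"},
        ExpressionAttributeValues={
            ":s": status,
            ":t": datetime.now(timezone.utc).isoformat(),
        },
    )
```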
Step 5
When the files are created in the S3 stage bucket, an event is emitted to EventBridge, where a rule sends a message to Amazon SQS with the details of the created files.
Step 6
EventBridge Scheduler runs at a defined interval and invokes a Lambda function that retrieves messages from Amazon SQS and starts another Step Functions workflow.
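The scheduled Lambda function might look like the sketch below, which drains a batch of messages from the queue, starts the second workflow with the collected file details, and then deletes the messages. Queue URL and state machine ARN are placeholders.

```python
import json
import boto3

sqs = boto3.client("sqs")
sfn = boto3.client("stepfunctions")

# Hypothetical resource identifiers.
QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/111122223333/stage-events-queue"
STATE_MACHINE_ARN = "arn:aws:states:us-east-1:111122223333:stateMachine:stage-to-analytics"

def handler(event, context):
    """Invoked by EventBridge Scheduler; starts the stage-to-analytics workflow."""
    messages = sqs.receive_message(
        QueueUrl=QUEUE_URL, MaxNumberOfMessages=10, WaitTimeSeconds=5
    ).get("Messages", [])
    if not messages:
        return

    files = [json.loads(m["Body"]) for m in messages]
    sfn.start_execution(
        stateMachineArn=STATE_MACHINE_ARN,
        input=json.dumps({"files": files}),
    )

    # Delete the messages only after the workflow has been started.
    for m in messages:
        sqs.delete_message(QueueUrl=QUEUE_URL, ReceiptHandle=m["ReceiptHandle"])
```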
Step 7
An AWS Glue extract, transform, load (ETL) job reads the data from the stage AWS Glue database and converts the files from JSON to Parquet format.
Step 8
The AWS Glue ETL job writes the Parquet files to the S3 analytics bucket. An AWS Glue crawler crawls the Parquet files in that bucket and creates the analytics tables in the analytics AWS Glue database.
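A condensed AWS Glue PySpark script for steps 7 and 8 might look like the sketch below; the database, table, and bucket names are hypothetical placeholders, not the ones this Guidance deploys.

```python
import sys

from awsglue.context import GlueContext
from awsglue.job import Job
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext

args = getResolvedOptions(sys.argv, ["JOB_NAME"])

sc = SparkContext()
glue_context = GlueContext(sc)
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Step 7: read the staged JSON data through the Data Catalog (stage database).
stage_frame = glue_context.create_dynamic_frame.from_catalog(
    database="datalake_stage", table_name="orders"
)

# Step 8: write the data to the analytics bucket in Parquet format.
glue_context.write_dynamic_frame.from_options(
    frame=stage_frame,
    connection_type="s3",
    connection_options={"path": "s3://datalake-analytics-bucket/orders/"},
    format="parquet",
)

job.commit()
```

After the job completes, the workflow can start the crawler (for example, through the AWS Glue StartCrawler API) so the new Parquet files are registered as analytics tables.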
Step 9
All stage and analytics table definitions are maintained in the AWS Glue Data Catalog.
Step 10
A Lambda function updates the DynamoDB table with the Step Functions job status.
Step 11
Business analysts use Amazon Athena to query the tables in the analytics AWS Glue database.
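For illustration, a query could also be run programmatically with Boto3 as sketched below; the database, table, query, and results location are assumed placeholders.

```python
import time

import boto3

athena = boto3.client("athena")

# Hypothetical database, table, and query results location.
query_id = athena.start_query_execution(
    QueryString="SELECT order_status, COUNT(*) AS orders FROM orders GROUP BY order_status",
    QueryExecutionContext={"Database": "datalake_analytics"},
    ResultConfiguration={"OutputLocation": "s3://datalake-athena-results/"},
)["QueryExecutionId"]

# Wait for the query to finish before reading results.
while True:
    state = athena.get_query_execution(QueryExecutionId=query_id)["QueryExecution"]["Status"]["State"]
    if state in ("SUCCEEDED", "FAILED", "CANCELLED"):
        break
    time.sleep(1)

if state == "SUCCEEDED":
    rows = athena.get_query_results(QueryExecutionId=query_id)["ResultSet"]["Rows"]
```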
Get Started
Deploy this Guidance
Well-Architected Pillars
The AWS Well-Architected Framework helps you understand the pros and cons of the decisions you make when building systems in the cloud. The six pillars of the Framework allow you to learn architectural best practices for designing and operating reliable, secure, efficient, cost-effective, and sustainable systems. Using the AWS Well-Architected Tool, available at no charge in the AWS Management Console, you can review your workloads against these best practices by answering a set of questions for each pillar.
The architecture diagram above is an example of a Solution created with Well-Architected best practices in mind. To be fully Well-Architected, you should follow as many Well-Architected best practices as possible.
Operational Excellence
Amazon CloudWatch provides insight into the performance and health of the workload through operational logs from every architectural component. Use Amazon S3 server access logging to capture detailed records of requests made to your data lake, which helps you conduct security and access audits and understand your Amazon S3 billing. DynamoDB tracks the status of your data lake pipeline jobs, so you can quickly identify and resolve any errors that arise.
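Server access logging can be enabled on a data lake bucket as sketched below with Boto3; the bucket names are placeholders, and the target log bucket must already grant the S3 logging service permission to deliver logs.

```python
import boto3

s3 = boto3.client("s3")

# Hypothetical bucket names.
DATA_BUCKET = "datalake-raw-bucket"
LOG_BUCKET = "datalake-access-logs"

# Deliver S3 server access logs for the data bucket to a dedicated log bucket.
s3.put_bucket_logging(
    Bucket=DATA_BUCKET,
    BucketLoggingStatus={
        "LoggingEnabled": {
            "TargetBucket": LOG_BUCKET,
            "TargetPrefix": "raw-bucket-logs/",
        }
    },
)
```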
Security
AWS Key Management Service (AWS KMS) safeguards your data lake by encrypting all data at rest with customer-managed keys, and data in transit is protected with TLS 1.2 encryption. AWS Identity and Access Management (IAM) lets you manage identities and access to your AWS services and resources, following the principle of least privilege.
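Default encryption with a customer-managed key can be applied to a bucket as in the sketch below; the bucket name and key ARN are hypothetical examples.

```python
import boto3

s3 = boto3.client("s3")

# Hypothetical bucket name and customer-managed KMS key ARN.
BUCKET = "datalake-raw-bucket"
KMS_KEY_ARN = "arn:aws:kms:us-east-1:111122223333:key/1234abcd-12ab-34cd-56ef-1234567890ab"

# Encrypt every object written to the bucket with the customer-managed key by default.
s3.put_bucket_encryption(
    Bucket=BUCKET,
    ServerSideEncryptionConfiguration={
        "Rules": [
            {
                "ApplyServerSideEncryptionByDefault": {
                    "SSEAlgorithm": "aws:kms",
                    "KMSMasterKeyID": KMS_KEY_ARN,
                },
                "BucketKeyEnabled": True,
            }
        ]
    },
)
```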
Reliability
Amazon S3 serves as the highly durable and available storage layer. Data pipelines are triggered through EventBridge, which sends messages to Amazon SQS to initiate pipeline jobs. Processing errors are handled by moving messages to a dead-letter queue for debugging and reprocessing. For added resilience, the Guidance can be redeployed to another AWS Region or account in the event of a Regional failure.
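A dead-letter queue is attached to a source queue through its redrive policy, as in the sketch below; the queue URL, dead-letter queue ARN, and retry count are assumptions for illustration.

```python
import json

import boto3

sqs = boto3.client("sqs")

# Hypothetical queue URL and dead-letter queue ARN.
QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/111122223333/raw-events-queue"
DLQ_ARN = "arn:aws:sqs:us-east-1:111122223333:raw-events-dlq"

# Move messages to the dead-letter queue after five failed processing attempts.
sqs.set_queue_attributes(
    QueueUrl=QUEUE_URL,
    Attributes={
        "RedrivePolicy": json.dumps(
            {"deadLetterTargetArn": DLQ_ARN, "maxReceiveCount": "5"}
        )
    },
)
```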
Performance Efficiency
This Guidance optimizes performance by using Lambda for lightweight tasks and AWS Glue for heavy data transformations. AWS Glue, a serverless data integration service, simplifies and accelerates data preparation while reducing costs, and it runs transformation jobs at scale on Apache Spark. Step Functions orchestrates the AWS Glue jobs, coordinating this distributed processing to enhance the data pipeline's performance.
Cost Optimization
This Guidance uses serverless AWS services, reducing total ownership costs and enabling scalability based on demand. Amazon S3 serves as the storage layer, offering various cost-efficient storage classes with automated lifecycle management for diverse data access patterns. By shifting infrastructure management to AWS, the serverless approach allows developers to focus on code, further lowering costs and improving efficiency.
Sustainability
Serverless services in this Guidance scale with demand, maximizing energy efficiency and minimizing the compute resources used. Amazon S3 lifecycle policies manage stored data, and processed data is stored in Parquet, a compressed format that reduces the data scanned per query and further decreases the compute needed for the workload. The combination of serverless architecture and efficient data storage optimizes overall performance and resource utilization.
Related Content
[Title]
Disclaimer
The sample code; software libraries; command line tools; proofs of concept; templates; or other related technology (including any of the foregoing that are provided by our personnel) is provided to you as AWS Content under the AWS Customer Agreement, or the relevant written agreement between you and AWS (whichever applies). You should not use this AWS Content in your production accounts, or on production or other critical data. You are responsible for testing, securing, and optimizing the AWS Content, such as sample code, as appropriate for production grade use based on your specific quality control practices and standards. Deploying AWS Content may incur AWS charges for creating or using AWS chargeable resources, such as running Amazon EC2 instances or using Amazon S3 storage.
References to third-party services or organizations in this Guidance do not imply an endorsement, sponsorship, or affiliation between Amazon or AWS and the third party. Guidance from AWS is a technical starting point, and you can customize your integration with third-party services when you deploy the architecture.