This Guidance demonstrates how to provision data for collaboration using AWS Clean Rooms. Data connectors ingest data from external sources and prepare it; the prepared data is then imported into AWS and made available for collaboration.
Architecture Diagram
Step 1
Data stored in external applications needs to be ingested into Amazon Simple Storage Service (Amazon S3). Either export data directly from a SaaS application that supports a native Amazon S3 connector, or use the AWS Glue extract, transform, and load (ETL) service to pull data from relational databases.
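Where the source is a relational database, a minimal AWS Glue ETL job might look like the following sketch. The catalog database, table, and bucket names are placeholders, not part of this Guidance's published code.

```python
# Hedged sketch: read a table registered in the AWS Glue Data Catalog
# (for example, a crawled relational database) and land it in the
# ingestion bucket for downstream processing.
import sys
from awsglue.utils import getResolvedOptions
from awsglue.context import GlueContext
from awsglue.job import Job
from pyspark.context import SparkContext

args = getResolvedOptions(sys.argv, ["JOB_NAME"])
glue_context = GlueContext(SparkContext.getOrCreate())
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Read the source table from the Data Catalog (placeholder names)
source = glue_context.create_dynamic_frame.from_catalog(
    database="source_db",
    table_name="customer_events",
)

# Write the raw export to the ingestion bucket (placeholder path)
glue_context.write_dynamic_frame.from_options(
    frame=source,
    connection_type="s3",
    connection_options={"path": "s3://example-ingest-bucket/raw/"},
    format="parquet",
)
job.commit()
```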
Step 2
Create a rule in Amazon EventBridge to schedule the data processing workflow in AWS Step Functions. The state machine orchestrates the data ingestion and downstream processing steps.
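As an illustration, the EventBridge rule and its Step Functions target could be created with a few boto3 calls. The rule name, schedule, ARNs, and IAM role below are placeholders.

```python
# Hedged sketch: create a scheduled EventBridge rule that starts the
# Step Functions state machine for the ingestion workflow.
import boto3

events = boto3.client("events")

# Run the ingestion workflow once a day (placeholder schedule)
events.put_rule(
    Name="daily-clean-rooms-ingestion",
    ScheduleExpression="rate(1 day)",
    State="ENABLED",
)

# Point the rule at the state machine; the role must allow states:StartExecution
events.put_targets(
    Rule="daily-clean-rooms-ingestion",
    Targets=[{
        "Id": "ingestion-state-machine",
        "Arn": "arn:aws:states:us-east-1:111122223333:stateMachine:IngestionWorkflow",
        "RoleArn": "arn:aws:iam::111122223333:role/EventBridgeInvokeStepFunctions",
    }],
)
```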
Step 3
Use an AWS Lambda function to decrypt the files from the source Amazon S3 bucket using AWS Key Management Service (AWS KMS) and place them in a different prefix for AWS Glue DataBrew to pick up and process.
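A hedged sketch of such a Lambda function follows. The prefixes are placeholders, and the decryption helper is illustrative only: kms.decrypt works for small payloads encrypted directly with a KMS key, while PGP-encrypted files would need a PGP library plus key material retrieved from AWS KMS or Secrets Manager.

```python
# Hedged sketch of the decryption Lambda: copy newly arrived files from the
# encrypted source prefix to the prefix that DataBrew reads from.
import boto3

s3 = boto3.client("s3")
kms = boto3.client("kms")

SOURCE_PREFIX = "incoming/"    # placeholder prefixes
TARGET_PREFIX = "decrypted/"

def decrypt_payload(ciphertext: bytes) -> bytes:
    # Placeholder decryption step: suitable only for small payloads encrypted
    # directly with a KMS key; substitute the routine your key material requires.
    return kms.decrypt(CiphertextBlob=ciphertext)["Plaintext"]

def handler(event, context):
    # Invoked by the Step Functions workflow with the bucket/key to process
    bucket = event["bucket"]
    key = event["key"]

    ciphertext = s3.get_object(Bucket=bucket, Key=key)["Body"].read()
    plaintext = decrypt_payload(ciphertext)

    target_key = key.replace(SOURCE_PREFIX, TARGET_PREFIX, 1)
    s3.put_object(Bucket=bucket, Key=target_key, Body=plaintext)
    return {"decrypted_key": target_key}
```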
Step 4
Use an AWS Glue DataBrew recipe to transform the data from the decrypted source Amazon S3 location. Use this step to normalize the data and secure personally identifiable information (PII) using the SHA-256 hashing algorithm.
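For illustration only, the plain-Python snippet below shows the effect of the recipe's normalize-and-hash step on a PII value; the actual transformation is configured as a DataBrew recipe step, and the salt handling here is an assumption.

```python
# Illustrative only: SHA-256 hashing of a normalized PII value, matching the
# intent of the DataBrew recipe step.
import hashlib

def hash_pii(value: str, salt: str = "") -> str:
    # Normalize first so "Jane.Doe@Example.com " and "jane.doe@example.com"
    # produce the same token for matching across collaborators.
    normalized = value.strip().lower()
    return hashlib.sha256((salt + normalized).encode("utf-8")).hexdigest()

print(hash_pii("Jane.Doe@Example.com "))
```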
Step 5
The output of the AWS Glue DataBrew recipe is written to the target Amazon S3 bucket and prefix in Parquet format.
Step 6
An AWS Glue crawler run is initiated to "refresh" the table definition and its associated metadata.
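The crawler can be started from a lightweight workflow step such as the sketch below; the crawler name is a placeholder.

```python
# Hedged sketch: trigger the crawler so the Data Catalog table reflects the
# newly written Parquet files.
import boto3

glue = boto3.client("glue")

def handler(event, context):
    glue.start_crawler(Name="clean-rooms-output-crawler")
    return {"crawler": "started"}
```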
Step 7
After the AWS Glue crawler run concludes, a Lambda function moves the source data files to an "archive" prefix location as part of cleanup.
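A hedged sketch of this cleanup function follows; the bucket and prefix names are placeholders.

```python
# Hedged sketch of the cleanup Lambda: copy processed source files to an
# "archive" prefix and delete the originals.
import boto3

s3 = boto3.client("s3")

def handler(event, context):
    bucket = event["bucket"]
    paginator = s3.get_paginator("list_objects_v2")
    for page in paginator.paginate(Bucket=bucket, Prefix="incoming/"):
        for obj in page.get("Contents", []):
            archive_key = obj["Key"].replace("incoming/", "archive/", 1)
            s3.copy_object(
                Bucket=bucket,
                Key=archive_key,
                CopySource={"Bucket": bucket, "Key": obj["Key"]},
            )
            s3.delete_object(Bucket=bucket, Key=obj["Key"])
```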
Step 8
An event is published to Amazon Simple Notification Service (Amazon SNS) to inform the user that the new data files are now available for consumption within AWS Clean Rooms.
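The notification could be published with a single SNS call, as in this sketch; the topic ARN and message contents are placeholders.

```python
# Hedged sketch: publish the "new data available" notification.
import boto3

sns = boto3.client("sns")

sns.publish(
    TopicArn="arn:aws:sns:us-east-1:111122223333:clean-rooms-data-ready",
    Subject="New data available in AWS Clean Rooms",
    Message="The latest files have been transformed and cataloged for collaboration.",
)
```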
Step 9
The user can use the latest data within the AWS Clean Rooms service to collaborate with other data producers.
Security, Logging, and Audit
The solution uses the following AWS services to promote security and access control:
AWS Identity and Access Management (IAM): Least-privilege access to specific resources and operations
AWS KMS: Provides encryption for data at rest and data in transit (using PGP encryption of data files)
AWS Secrets Manager: Provides the hashing keys used for PII data
Amazon CloudWatch: Monitors logs and metrics across all services used in this solution
Well-Architected Pillars
The AWS Well-Architected Framework helps you understand the pros and cons of the decisions you make when building systems in the cloud. The six pillars of the Framework allow you to learn architectural best practices for designing and operating reliable, secure, efficient, cost-effective, and sustainable systems. Using the AWS Well-Architected Tool, available at no charge in the AWS Management Console, you can review your workloads against these best practices by answering a set of questions for each pillar.
The architecture diagram above is an example of a Solution created with Well-Architected best practices in mind. To be fully Well-Architected, you should follow as many Well-Architected best practices as possible.
Operational Excellence
Every service has built-in observability, with metrics published to CloudWatch, where dashboards and alarms are then configured.
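As one example of what such an alarm might look like, the sketch below alerts on failed Step Functions executions; the alarm name, state machine ARN, threshold, and SNS topic are assumptions.

```python
# Hedged sketch: a CloudWatch alarm on failed executions of the ingestion
# state machine, notifying an SNS topic.
import boto3

cloudwatch = boto3.client("cloudwatch")

cloudwatch.put_metric_alarm(
    AlarmName="ingestion-workflow-failures",
    Namespace="AWS/States",
    MetricName="ExecutionsFailed",
    Dimensions=[{
        "Name": "StateMachineArn",
        "Value": "arn:aws:states:us-east-1:111122223333:stateMachine:IngestionWorkflow",
    }],
    Statistic="Sum",
    Period=300,
    EvaluationPeriods=1,
    Threshold=1,
    ComparisonOperator="GreaterThanOrEqualToThreshold",
    AlarmActions=["arn:aws:sns:us-east-1:111122223333:clean-rooms-alerts"],
)
```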
Security
IAM policies are created using least-privilege access, so every policy is restricted to the specific resources and operations it needs. Secrets, keys, and configuration items are centrally managed and secured using AWS KMS. Data at rest in the Amazon S3 buckets is encrypted using AWS KMS keys. File transfers into Amazon S3 are secured using Pretty Good Privacy (PGP) encryption, and all data transfer through API calls is encrypted in transit using TLS 1.2.
Reliability
Every service or technology for each architecture layer is fully managed by AWS, making the overall architecture elastic, highly available, and fault-tolerant. Incremental data processing is not included in the solution. This solution is built using a multi-tier architecture, where every tier is independently scalable, deployable, and testable.
Performance Efficiency
Using serverless technologies, you provision only the exact resources you use. The serverless architecture reduces the amount of underlying infrastructure you need to manage, allowing you to focus on solving your business needs. All components of the solution are colocated in a single AWS Region and use a serverless stack, which avoids the need for you to make infrastructure location decisions beyond the Region choice. You can use automated deployments to deploy the solution components into any Region quickly, supporting data residency requirements and reducing latency. Experiments and tests can be performed against different load levels, configurations, and services.
Cost Optimization
This Guidance uses managed services for cost optimization. As the data ingestion velocity increases and decreases, the costs align with usage. When AWS Glue is performing data transformations, you pay only for infrastructure while the processing is occurring. In addition, through a tenant solution model and resource tagging, you can automate cost usage alerts and measure costs specific to each tenant, application module, and service. IAM policies are created using least-privilege access, such that every policy is restricted to the specific resource and operation.
Sustainability
By using serverless services, you maximize overall resource utilization and reduce the amount of energy required to operate the workload.
You can also use the AWS Customer Carbon Footprint Tool to calculate and track the environmental impact of the workload over time at any account, region, or service level.
Implementation Resources
A detailed implementation guide is provided for you to experiment with and use in your own AWS account. It walks through each stage of the Guidance, including deployment, usage, and cleanup.
Related Content
Disclaimer
The sample code; software libraries; command line tools; proofs of concept; templates; or other related technology (including any of the foregoing that are provided by our personnel) is provided to you as AWS Content under the AWS Customer Agreement, or the relevant written agreement between you and AWS (whichever applies). You should not use this AWS Content in your production accounts, or on production or other critical data. You are responsible for testing, securing, and optimizing the AWS Content, such as sample code, as appropriate for production grade use based on your specific quality control practices and standards. Deploying AWS Content may incur AWS charges for creating or using AWS chargeable resources, such as running Amazon EC2 instances or using Amazon S3 storage.