[SEO Subhead]
This Guidance demonstrates how to deploy the Google Privacy Sandbox “Aggregation Service” within a trusted execution environment (TEE) using AWS services. The Aggregation Service can be used to produce aggregate campaign measurement data (summary reports) from reports collected through the Privacy Sandbox Attribution Reporting API (ARA) or the Private Aggregation API. This Guidance includes several features to help streamline deployment for AWS customers, including:
- An overview of end-to-end collection, batching, and orchestration of aggregation jobs by the service
- Example implementations showing how to convert records to Avro before they are processed by the service
- Example implementations for preparing report batches and enriching data before processing by the service (planned for future releases)
- Example implementations of a collection service that exposes endpoints for collecting event-level and summary reports
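As a sketch of the Avro conversion mentioned above, the following hypothetical helper (not part of the Guidance's sample code) maps one ARA aggregatable report from its JSON form to the record shape used by the Aggregation Service's Avro report schema (`payload`, `key_id`, `shared_info`). Actual Avro serialization would use a library such as fastavro, which is omitted here to keep the sketch self-contained.

```python
import base64
import json

# Avro schema used by the Aggregation Service for aggregatable reports,
# shown for reference; serialization itself would use an Avro library.
REPORT_SCHEMA = {
    "type": "record",
    "name": "AvroAggregatableReport",
    "fields": [
        {"name": "payload", "type": "bytes"},
        {"name": "key_id", "type": "string"},
        {"name": "shared_info", "type": "string"},
    ],
}


def report_to_avro_record(report_json: str) -> dict:
    """Map one ARA aggregatable report (JSON) to the Avro record shape."""
    report = json.loads(report_json)
    # Each report carries one or more encrypted payloads; the sketch
    # takes the first for simplicity.
    payload = report["aggregation_service_payloads"][0]
    return {
        "payload": base64.b64decode(payload["payload"]),
        "key_id": payload["key_id"],
        "shared_info": report["shared_info"],
    }
```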
Please note: [Disclaimer]
Architecture Diagram
[Architecture diagram description]
Step 1
A Chrome browser user with Privacy Sandbox features enabled is browsing a publisher’s site. The user performs an action that causes the Attribution Reporting API or Private Aggregation API to send event-level or aggregated reports to the Collector service.
Step 2
The Collector service exposes a set of well-known URLs that collect summary reports and event-level reports from the Chrome browser. AWS WAF protects these URLs.
Elastic Load Balancing distributes traffic to a validation service hosted on Amazon Elastic Container Service (Amazon ECS), which verifies that incoming requests are well formed. Well-formed requests are sent to Amazon Kinesis Data Streams.
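A minimal sketch of that validation check is shown below, assuming the incoming body is an ARA aggregatable report in JSON form. The field names follow the ARA report format; the actual validation service may apply additional checks before forwarding the request to Kinesis Data Streams.

```python
import base64
import json

# Top-level fields an ARA aggregatable report is expected to carry.
REQUIRED_TOP_LEVEL = ("shared_info", "aggregation_service_payloads")


def is_well_formed(body: bytes) -> bool:
    """Return True if the request body looks like a valid aggregatable report."""
    try:
        report = json.loads(body)
    except (ValueError, UnicodeDecodeError):
        return False
    if not all(key in report for key in REQUIRED_TOP_LEVEL):
        return False
    for payload in report["aggregation_service_payloads"]:
        if "payload" not in payload or "key_id" not in payload:
            return False
        try:
            # Encrypted payloads are base64-encoded; reject anything else.
            base64.b64decode(payload["payload"], validate=True)
        except ValueError:
            return False
    return True
```

Requests that pass this check would then be written to the stream (for example, with the boto3 Kinesis `put_record` call); requests that fail would be rejected with an HTTP 4xx response.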
Step 3
The Batching service prepares aggregatable reports for processing by the Google Privacy Sandbox Aggregation Service. AWS Glue reads records from Kinesis Data Streams and persists them to Amazon Simple Storage Service (Amazon S3) in JSON and Avro formats. An AWS Step Functions workflow is invoked by Amazon EventBridge at a set interval to process batches.
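One way to organize those batches, sketched below under assumed conventions (the prefix layout and 60-minute window are illustrative, not prescribed by the Guidance), is to key each record to a time-window S3 prefix so the scheduled Step Functions workflow can submit one aggregation job per completed window.

```python
from datetime import datetime, timezone


def batch_prefix(report_time: datetime, window_minutes: int = 60) -> str:
    """Compute the S3 prefix for the batch window a report falls into.

    Records landing in the same window share a prefix, so a scheduled
    workflow can submit one aggregation job per prefix.
    """
    epoch_minutes = int(report_time.timestamp()) // 60
    window_start = (epoch_minutes // window_minutes) * window_minutes
    start = datetime.fromtimestamp(window_start * 60, tz=timezone.utc)
    return start.strftime("reports/%Y/%m/%d/%H%M/")
```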
Step 4
Reports and event-level data that can be aggregated are stored in Amazon S3.
Step 5
The Google Privacy Sandbox Aggregation Service is deployed with the community edition of Terraform, using the deployment scripts provided as part of the Privacy Sandbox initiative. The Aggregation Service uses Amazon API Gateway to receive requests from the Step Functions workflow.
AWS Lambda functions orchestrate processing jobs to produce summary reports. Amazon DynamoDB tracks the progress of processing jobs. AWS Nitro Enclaves provide a trusted execution environment (TEE) to process reports that can be aggregated.
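As an illustration of the orchestration step, the following sketch builds a `createJob` request body in the shape of the published Aggregation Service API. The bucket name, prefixes, and reporting origin here are placeholders; in practice the Lambda function would POST this body (SigV4-signed) to the API Gateway endpoint at `/v1alpha/createJob` and poll job status with `getJob`.

```python
import uuid


def build_create_job_request(bucket: str, prefix: str,
                             reporting_origin: str) -> dict:
    """Build a createJob request body for the Aggregation Service API.

    Field names follow the Aggregation Service API; all values passed in
    here are placeholders for illustration.
    """
    return {
        # Unique ID used to track the job in DynamoDB and via getJob.
        "job_request_id": str(uuid.uuid4()),
        "input_data_bucket_name": bucket,
        "input_data_blob_prefix": prefix + "reports.avro",
        "output_data_bucket_name": bucket,
        "output_data_blob_prefix": prefix + "summary/",
        "job_parameters": {
            "attribution_report_to": reporting_origin,
            "output_domain_bucket_name": bucket,
            "output_domain_blob_prefix": prefix + "domain/",
        },
    }
```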
Well-Architected Pillars
The AWS Well-Architected Framework helps you understand the pros and cons of the decisions you make when building systems in the cloud. The six pillars of the Framework allow you to learn architectural best practices for designing and operating reliable, secure, efficient, cost-effective, and sustainable systems. Using the AWS Well-Architected Tool, available at no charge in the AWS Management Console, you can review your workloads against these best practices by answering a set of questions for each pillar.
The architecture diagram above is an example of a Solution created with Well-Architected best practices in mind. To be fully Well-Architected, you should follow as many Well-Architected best practices as possible.
Operational Excellence
By harnessing the capabilities of Amazon CloudWatch and Amazon ECS, users can optimize their operational processes, reduce manual interventions, and maintain a high-performing, resilient infrastructure. Specifically, CloudWatch provides comprehensive logging and insights so users can monitor the performance and health of their services running on Amazon ECS. Users can also easily scale their workloads with Amazon ECS and adapt this Guidance to meet their changing demands.
Security
The comprehensive security services of AWS Identity and Access Management (IAM), AWS WAF, and AWS Key Management Service (AWS KMS) work together to fortify workloads. Specifically, IAM policies grant the minimum required access, adhering to the principle of least privilege. AWS KMS protects user data at rest by allowing users to easily manage cryptographic keys. Finally, public-facing endpoints are protected with AWS WAF, shielding workloads from malicious attacks and Distributed Denial of Service (DDoS) threats.
Reliability
Several AWS services facilitate workload recovery from failures or disruptions. Amazon S3 provides durable data storage with versioning and replication capabilities, safeguarding data against accidental deletions or application failures. Amazon ECS and Amazon Elastic Compute Cloud (Amazon EC2) distribute workloads across multiple Availability Zones for high availability. Kinesis Data Streams durably stores and retains data for a specified period of time so that data is not lost in the event of failures or disruptions. Elastic Load Balancing efficiently distributes traffic across resources, ensuring workloads can handle increased demand during disruptions or spikes.
Performance Efficiency
Optimize the performance of this Guidance with Amazon ECS and AWS Graviton processors. Amazon ECS simplifies the scaling of containerized workloads, allowing users to dynamically adjust their compute resources to meet fluctuating demands. AWS Graviton processors are custom AWS silicon with improved price performance, resulting in higher throughput and lower latency for requests.
Cost Optimization
Amazon S3, Amazon ECS, and AWS Glue work in tandem to deliver business value at the lowest possible cost while avoiding unnecessary expenses. Amazon S3 allows users to store and retrieve data at scale, paying only for the storage they use without the need to provision and manage physical infrastructure. Amazon ECS dynamically scales compute resources, helping to ensure users only pay for the resources consumed. And AWS Glue simplifies extract, transform, and load (ETL) workloads, automatically provisioning the necessary resources and reducing maintenance overhead.
Sustainability
By using AWS Graviton-based instances for this Guidance, users can optimize their workloads for environmental efficiency and reduce their carbon footprint. AWS Graviton processors use up to 60% less energy for the same performance than comparable Amazon EC2 instances, helping users contribute to a more sustainable cloud infrastructure.
Implementation Resources
The sample code is a starting point. It is industry validated, prescriptive but not definitive, and a peek under the hood to help you begin.
Related Content
[Title]
Disclaimer
The sample code; software libraries; command line tools; proofs of concept; templates; or other related technology (including any of the foregoing that are provided by our personnel) is provided to you as AWS Content under the AWS Customer Agreement, or the relevant written agreement between you and AWS (whichever applies). You should not use this AWS Content in your production accounts, or on production or other critical data. You are responsible for testing, securing, and optimizing the AWS Content, such as sample code, as appropriate for production grade use based on your specific quality control practices and standards. Deploying AWS Content may incur AWS charges for creating or using AWS chargeable resources, such as running Amazon EC2 instances or using Amazon S3 storage.
References to third-party services or organizations in this Guidance do not imply an endorsement, sponsorship, or affiliation between Amazon or AWS and the third party. Guidance from AWS is a technical starting point, and you can customize your integration with third-party services when you deploy the architecture.