Data Protection
As an AWS customer, you benefit from a data center and network architecture that is built to meet the requirements of the most security-sensitive organizations in the world, regardless of the sensitivity of your data and workloads. You also get advanced security services designed by engineers with deep insight into global security trends, built to work together and with products you already know and trust.
AWS also provides a wide range of security tools, with more than 230 security, compliance, and governance services and features available to help customers secure their applications. One example is the AWS Nitro System, the underlying platform for all modern Amazon Elastic Compute Cloud (Amazon EC2) instances, which provides additional confidentiality and privacy for your applications. By design, the Nitro System has no operator access: there is no mechanism for any system or person to log in to EC2 Nitro hosts, access the memory of EC2 instances, or access any customer data stored on local encrypted instance storage or remote encrypted Amazon EBS volumes. If any AWS operator, including those with the highest privileges, needs to perform maintenance on an EC2 server, they can only use a limited set of authenticated, authorized, logged, and audited administrative APIs, none of which provide the ability to access customer data on the server. Because these are designed and tested technical restrictions built into the Nitro System itself, no AWS operator can bypass these controls and protections. As part of our commitment to increased transparency, we engaged NCC Group, a leading cybersecurity consulting firm, to conduct an architecture review of our security claims for the Nitro System and produce a public report. Their report confirms that the AWS Nitro System, by design, has no mechanism for anyone at AWS to access your data on Nitro hosts. We have also added the Nitro controls to our AWS Service Terms (section 96), which apply to anyone who uses AWS.
For more information on Nitro, read our Confidential Compute blog post and the AWS Nitro Security Whitepaper, which describes in detail the security mechanisms in place. Nitro is available for all modern Amazon EC2 instances automatically and at no additional cost to the customer.
You. You own your customer content, and you select which AWS services can process and store it. We do not access or use your customer content for any purpose without your agreement. You control the security and access of your customer content, including identity management, access permissions and authentication methods, and its retention and deletion.
Customer content is defined as software (including machine images), data, text, audio, video, or images that a customer or any end user transfers to us for processing, storage, or hosting by AWS services in connection with a customer's account, and any computational results that a customer or their end user derives from the foregoing through their use of AWS services. For example, customer content includes content that a customer or their end user stores in Amazon Simple Storage Service (Amazon S3). Customer content does not include information included in resource identifiers, metadata tags, usage policies, permissions, and similar items related to the management of AWS resources.
For more information, visit our Data Privacy FAQ.
As a customer, you determine where your content is stored, including the type of storage and the geographic Region of that storage. The AWS Global Infrastructure gives you the flexibility to choose how and where you want to run your workloads. AWS will not move or replicate your content outside of your chosen AWS Region(s) without your agreement, except as necessary to comply with the law or a binding order of a governmental body. Independent external auditors have verified this commitment as part of our compliance programs, for example the German government's C5 attestation. Our certification reports are available on AWS Artifact.
This allows customers to use AWS services with confidence that customer content remains within their chosen AWS Region. For example, by choosing the AWS Europe (Paris) Region, composed of three Availability Zones (AZs), each including at least one data center, AWS customers control the location of their data and can build highly available applications on French territory. A small number of AWS services involve the transfer of data, either to develop and improve those services (in which case you can opt out of the transfer) or because transfer is an essential part of the service (such as a content delivery service). For more information, visit our Privacy Features of AWS Services page.
To learn more about controlling data location, see our whitepaper Using AWS in the Context of Common Privacy & Data Protection Considerations (section “AWS Regions: Where will content be stored?”).
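As a concrete illustration of Region choice, every AWS SDK client is pinned to an explicit Region, and data written through that client is stored there. Below is a minimal sketch using the AWS SDK for Python (boto3); the bucket name is hypothetical.

```python
import boto3

# Pin all S3 operations to the AWS Europe (Paris) Region (eu-west-3).
s3 = boto3.client("s3", region_name="eu-west-3")

# Create a bucket whose objects will be stored in eu-west-3
# (bucket name is hypothetical and must be globally unique).
s3.create_bucket(
    Bucket="example-sovereign-data",
    CreateBucketConfiguration={"LocationConstraint": "eu-west-3"},
)
```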
No. We prohibit – and our systems are designed to prevent – remote access by AWS personnel to customer content for any purpose, including service maintenance, unless that access is requested by the customer, or is required to prevent fraud and abuse, or is required to comply with law. Only persons authorized by the customer can access the content, without exception, as verified by independent external auditors as part of our C5 attestation.
At AWS, we believe that the best security tools should not compromise on cost, ease of operation, or performance. We recommend that customers encrypt their data in the cloud, and we provide tools, such as AWS Key Management Service (AWS KMS), to do so in a scalable, durable, and highly available manner. AWS holds as a fundamental security principle that there is no human interaction with plaintext cryptographic key material of any type in any AWS service. There is no mechanism for anyone, including AWS service operators, to view, access, or export plaintext key material. This principle applies even during catastrophic failures and disaster recovery events. Plaintext customer key material in AWS KMS is used for cryptographic operations within the AWS KMS FIPS-validated HSMs, and only in response to authorized requests made to the service by the customer or their delegate.
AWS KMS uses AWS-managed hardware security modules (HSMs) that are validated under the NIST Federal Information Processing Standards (FIPS) 140 Cryptographic Module Validation Program to protect the confidentiality and integrity of your keys. All key material for KMS keys is generated within these HSMs, and all operations that require decrypted KMS key material occur strictly within these FIPS 140-2 Security Level 3 validated HSMs. Per FIPS 140 requirements, all firmware changes to KMS HSMs are submitted to a NIST-accredited lab for validation in compliance with FIPS 140 Security Level 3. To learn more about how AWS KMS is architected and the cryptography it uses to secure keys, read the AWS KMS Cryptographic Details whitepaper.
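As a sketch of what an authorized customer request to AWS KMS looks like, the following uses the AWS SDK for Python (boto3) to perform envelope encryption with a data key; the key alias is hypothetical. The KMS key itself never leaves the HSMs, and only the data key is returned to the caller.

```python
import boto3

kms = boto3.client("kms", region_name="eu-west-3")

# Generate a 256-bit data key under a customer managed KMS key
# (the alias is hypothetical). KMS returns the plaintext data key
# and an encrypted copy of it.
resp = kms.generate_data_key(KeyId="alias/example-key", KeySpec="AES_256")
plaintext_key = resp["Plaintext"]       # encrypt data locally, then discard
encrypted_key = resp["CiphertextBlob"]  # store alongside the encrypted data

# Later, ask KMS to decrypt the stored copy to recover the data key.
plaintext_key_again = kms.decrypt(CiphertextBlob=encrypted_key)["Plaintext"]
assert plaintext_key == plaintext_key_again
```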
AWS offers two advanced configuration options for AWS KMS, called custom key stores, that combine the convenient key management interface of AWS KMS with the ability to control and manage the HSMs where key material and cryptographic operations occur, or even to use your own HSMs outside of the AWS Cloud. These custom key store options can help you meet regulatory requirements to store and use your encryption keys on premises or outside of the AWS Cloud, or to use single-tenant HSMs. Custom key stores offer the same security benefits as AWS KMS but carry different (and higher) management and cost implications: you assume more responsibility for the availability and durability of cryptographic keys and for the operation of the HSMs. Regardless of whether you use AWS KMS with AWS-managed HSMs or choose a custom key store, AWS KMS enables you to maintain control over who can use your AWS KMS keys and gain access to your encrypted data. AWS KMS supports two types of custom key stores:
AWS CloudHSM key store
You can create a KMS key in an AWS CloudHSM key store, where KMS key material is generated, stored, and used in a CloudHSM cluster that you own and manage. Requests to AWS KMS to use such a key for cryptographic operations are forwarded to your CloudHSM cluster to perform the operation. While a CloudHSM cluster is hosted by AWS, it is a single-tenant solution that is directly managed and operated by you, and you are responsible for much of the availability and performance of the keys in it. To see if a CloudHSM-backed key store is a good fit for your requirements, read this blog.
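As a sketch, here is how a KMS key backed by a CloudHSM key store might be created with the AWS SDK for Python (boto3); the custom key store ID is hypothetical and must reference an existing, connected AWS CloudHSM key store.

```python
import boto3

kms = boto3.client("kms")

# Key material for this KMS key is generated and used inside the
# CloudHSM cluster behind the key store (ID is hypothetical).
resp = kms.create_key(
    Origin="AWS_CLOUDHSM",
    CustomKeyStoreId="cks-1234567890abcdef0",
    Description="Example key backed by an AWS CloudHSM key store",
)
print(resp["KeyMetadata"]["KeyId"])
```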
External key store
You can configure AWS KMS to use an external key store (XKS), where KMS key material is generated, stored, and used in a key management system outside the AWS Cloud. Requests to AWS KMS to use such a key for cryptographic operations are forwarded to your externally hosted system to perform the operation. Specifically, requests are forwarded to an XKS proxy in your network, which then forwards them to your preferred cryptographic system. The XKS proxy is an open-sourced specification that you can implement against your on-premises solution, or you can use one of the many commercial key management vendors whose products already support it. Because an external key store is hosted by you or a third party, you are responsible for the availability, durability, and performance of the keys in the system. To see if XKS is a good fit for your requirements, read this blog.
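Similarly, a sketch of creating a KMS key backed by an external key store with the AWS SDK for Python (boto3); the key store ID and external key ID are hypothetical, and the referenced key must already exist in your external key manager behind a connected XKS proxy.

```python
import boto3

kms = boto3.client("kms")

# Cryptographic operations with this KMS key are forwarded through
# your XKS proxy to the external key manager (IDs are hypothetical).
resp = kms.create_key(
    Origin="EXTERNAL_KEY_STORE",
    CustomKeyStoreId="cks-9876543210abcdef0",
    XksKeyId="example-external-key-id",
    Description="Example key backed by an external key store",
)
print(resp["KeyMetadata"]["KeyId"])
```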
AWS European Sovereign Cloud
We announced plans to launch the AWS European Sovereign Cloud, a new, independent cloud for Europe, designed to help public sector organizations and customers in highly regulated industries meet their evolving sovereignty needs. We’re designing the AWS European Sovereign Cloud to be separate and independent from our existing Regions, with infrastructure located wholly within the European Union (EU), with the same security, availability, and performance our customers get from existing Regions today. As with all current Regions, customers using the AWS European Sovereign Cloud will benefit from the full power of AWS with the same familiar architecture, expansive service portfolio, and APIs that millions of customers use today.
To deliver enhanced operational autonomy and resilience within the EU, only personnel who are EU residents, located in the EU, will have control of day-to-day operations, including access to data centers, technical support, and customer service of the AWS European Sovereign Cloud. To learn more, read our announcement or watch the video below.
The AWS European Sovereign Cloud is set to launch its first AWS Region in Germany by the end of 2025.
The first AWS Region of the AWS European Sovereign Cloud will be located in the State of Brandenburg, Germany. Learn more in the Amazon News blog, AWS plans to invest €7.8 billion into the AWS European Sovereign Cloud.
When launching a new Region, we start with the core services needed to support critical workloads and applications and then continue to expand our service catalog based on customer and partner demand. The AWS European Sovereign Cloud will initially feature services from a range of categories, including artificial intelligence (Amazon SageMaker, Amazon Q, and Amazon Bedrock); compute (Amazon EC2 and AWS Lambda); containers (Amazon Elastic Kubernetes Service (Amazon EKS) and Amazon Elastic Container Service (Amazon ECS)); database (Amazon Aurora, Amazon DynamoDB, and Amazon Relational Database Service (Amazon RDS)); networking (Amazon Virtual Private Cloud (Amazon VPC)); security (AWS Key Management Service (AWS KMS) and AWS Private Certificate Authority); and storage (Amazon Simple Storage Service (Amazon S3) and Amazon Elastic Block Store (Amazon EBS)). Read the blog post for more details on the announcement and the roadmap of initial services.
Portability and Interoperability
AWS interconnects directly with many other networks, including those of other cloud providers, to give customers a reliable data transfer experience across different providers and networks. If a customer decides to move to another IT provider, we want to remove the barriers that make it harder to do so, because our focus is on building long-term customer trust, and removing those barriers makes AWS attractive to new and returning customers. Globally, customers are entitled to free data transfer out to the internet when they want to move their data outside of AWS. Learn more in our data transfers blog post.
AWS supports European Union (EU) standards for cloud infrastructure services, such as the Switching Cloud Providers and Porting Data (SWIPO) Code of Conduct.
We also support multiple industry initiatives, including the data-driven economy envisioned by Gaia-X. Gaia-X is an initiative that brings together representatives from business, science, and politics in Europe to help define requirements for the next generation of data infrastructure: an open, transparent, and secure digital ecosystem in which data and services can be made available, collated, and shared in an environment of trust. AWS joined Gaia-X to help European customers and partners accelerate cloud-driven innovation in Europe in a secure and federated digital ecosystem that fosters openness and transparency. You can learn more about this initiative in our blog post.
To facilitate application-layer portability, AWS offers numerous services built on open source technologies, including MySQL (Amazon RDS), PostgreSQL (Amazon RDS), Apache Kafka (Amazon Managed Streaming for Apache Kafka), Kubernetes (Amazon EKS), Elasticsearch (Amazon OpenSearch Service), MongoDB compatibility (Amazon DocumentDB), Apache Cassandra (Amazon Keyspaces (for Apache Cassandra)), and Apache Hadoop (Amazon EMR). At the infrastructure layer, VMware Cloud on AWS and Red Hat OpenShift Service on AWS are also available for customers who want to use these popular technologies with a consistent experience in the AWS Cloud.
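To make the portability point concrete: because Amazon RDS for PostgreSQL speaks the standard PostgreSQL wire protocol, the same client code runs unchanged against an RDS endpoint, another provider's managed PostgreSQL, or a self-hosted server. Below is a minimal sketch using the psycopg2 driver, with a hypothetical endpoint and credentials.

```python
import psycopg2

# The hostname is the only AWS-specific detail; point it at any
# PostgreSQL-compatible endpoint and the code stays the same.
conn = psycopg2.connect(
    host="example-db.abc123.eu-west-3.rds.amazonaws.com",  # hypothetical
    port=5432,
    dbname="appdb",
    user="app_user",
    password="example-password",
    sslmode="require",
)
with conn, conn.cursor() as cur:
    cur.execute("SELECT version();")
    print(cur.fetchone()[0])
```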
Yes. It is possible to transfer or replicate data out of the AWS Cloud. AWS supports customer choice, including the option to migrate your data to another cloud provider or on premises. We do not charge for data transfer out to the internet (DTO) when you want to migrate your data to another cloud provider or on premises.
AWS offers a wide range of solutions for transferring data: directly over the internet, via a private network connection (AWS Direct Connect), or via physical devices for moving large volumes of data (the AWS Snow Family). Data formats remain under full customer control. In addition, our contractual terms specify that customers can retrieve their data at any time (see the AWS Customer Agreement).
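As a minimal sketch of the simplest path, the following transfers data out over the internet with the AWS SDK for Python (boto3); the bucket name is hypothetical, and large-scale migrations would more likely use bulk tooling or AWS Snow Family devices.

```python
import boto3

s3 = boto3.client("s3", region_name="eu-west-3")

# Page through every object in the bucket (name is hypothetical)
# and download each one to local storage.
paginator = s3.get_paginator("list_objects_v2")
for page in paginator.paginate(Bucket="example-sovereign-data"):
    for obj in page.get("Contents", []):
        local_name = obj["Key"].replace("/", "_")
        s3.download_file("example-sovereign-data", obj["Key"], local_name)
```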
At AWS, we design cloud services to give customers the freedom to choose technology that best suits their needs, and our commitment to interoperability is a key reason customers choose AWS in the first place. Our open APIs and Software Development Kits (SDKs), services such as Amazon Elastic Container Service and Amazon EKS Anywhere, as well as our hybrid infrastructure services like the AWS Outposts Family and AWS Snow Family, allow customers and third parties to build compatible software and solutions. We have been at the forefront of developing technical solutions that allow customers to run their applications on AWS and still connect to other cloud providers, or on-premises, for any application dependencies.
Resilience
Cloud resilience refers to the ability of an application to resist or recover from disruptions, including those related to infrastructure, dependent services, misconfigurations, transient network issues, and load spikes. Cloud resilience also plays a critical role in an organization’s broader business resilience strategy, including the ability to meet digital sovereignty requirements. Customers need to know that their workloads in the cloud will continue to operate in the face of natural disasters, network disruptions, and disruptions due to geopolitical crises. Public sector organizations and customers in highly regulated industries rely on AWS to provide the highest level of resilience and security to help meet their needs. AWS protects millions of active customers worldwide across diverse industries and use cases, including large enterprises, startups, schools, and government agencies.
The AWS Global Cloud Infrastructure is designed to enable customers to build highly resilient workload architectures. AWS has made significant investments in building and running the world’s most resilient cloud by building safeguards into our service design and deployment mechanisms and instilling resilience into our operational culture. We build to guard against outages and incidents, and account for them in the design of AWS services—so when disruptions do occur, their impact on customers and the continuity of services is as minimal as possible. To avoid single points of failure, we minimize interconnectedness within our global infrastructure. The AWS global infrastructure is geographically dispersed, spanning 105 Availability Zones (AZs) within 33 AWS Regions around the world.
Each Region consists of multiple Availability Zones, and each AZ includes one or more discrete data centers with independent and redundant power infrastructure, networking, and connectivity. Availability Zones in a Region are meaningfully distant from each other, up to 60 miles (approximately 100 km) apart, to help prevent correlated failures, but close enough to support synchronous replication with single-digit millisecond latency. AWS is the only cloud provider to offer three or more Availability Zones within each of its Regions, providing greater redundancy and better isolation to contain issues. Common points of failure, such as generators and cooling equipment, aren’t shared across Availability Zones, and AZs are designed to be supplied by independent power substations. To better isolate issues and achieve high availability, customers can partition applications across multiple Availability Zones in the same Region. Learn more about how AWS maintains operational resilience and continuity of service.
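As a sketch of the multi-AZ pattern with the AWS SDK for Python (boto3): enumerate the Region's Availability Zones and place one instance in each, so the loss of a single AZ leaves capacity running elsewhere. The AMI ID is hypothetical, a default VPC is assumed, and a production deployment would more likely use an Auto Scaling group spanning AZs.

```python
import boto3

ec2 = boto3.client("ec2", region_name="eu-west-3")

# List the Availability Zones currently available in the Region.
zones = [
    z["ZoneName"]
    for z in ec2.describe_availability_zones(
        Filters=[{"Name": "state", "Values": ["available"]}]
    )["AvailabilityZones"]
]

# Launch one instance per AZ (AMI ID is hypothetical).
for az in zones:
    ec2.run_instances(
        ImageId="ami-0123456789abcdef0",
        InstanceType="t3.micro",
        MinCount=1,
        MaxCount=1,
        Placement={"AvailabilityZone": az},
    )
```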
Resilience is deeply ingrained in how we design services. At AWS, the services we build must meet extremely high availability targets. We think carefully about the dependencies that our systems take. Our systems are designed to stay resilient even when those dependencies are impaired; we use what is called static stability to achieve this level of resilience. This means that systems operate in a static state and continue to operate as normal without needing to make changes during a failure or when dependencies are unavailable. For example, in Amazon Elastic Compute Cloud (Amazon EC2), after an instance is launched, it’s just as available as a physical server in a data center. The same property holds for other AWS resources such as virtual private clouds (VPCs), Amazon Simple Storage Service (Amazon S3) buckets and objects, and Amazon Elastic Block Store (Amazon EBS) volumes. Learn more in our Fault Isolation Boundaries whitepaper.
Operational resilience is a shared responsibility. AWS is responsible for ensuring that the services used by our customers—the building blocks for their applications—are continuously available, as well as ensuring that we are prepared to handle a wide range of events that could affect our infrastructure. We provide resources that explore the customers’ responsibility for operational resilience—how customers can design, deploy, and test their applications on AWS to achieve the availability and resiliency they need, including for mission-critical applications that require almost no downtime. Learn more about the Shared Responsibility Model for Resiliency.
AWS provides a comprehensive set of purpose-built resilience services, strategies, and architectural best practices that you can use to improve your resilience posture and meet your sovereignty goals. These services, strategies, and best practices are outlined in the AWS Resilience Lifecycle Framework across five stages—Set Objectives, Design and Implement, Evaluate and Test, Operate, and Respond and Learn. The Resilience Lifecycle Framework is modeled after a standard software development lifecycle, so customers can easily incorporate resilience into their existing processes.
You can use AWS Resilience Hub to set resilience objectives, evaluate your resilience posture against those objectives, and implement recommendations for improvement based on the AWS Well-Architected Framework and AWS Trusted Advisor. Within Resilience Hub, you can create and run AWS Fault Injection Service experiments, which let you test how your application will respond to certain types of disruptions. Other AWS resilience services, such as AWS Backup, AWS Elastic Disaster Recovery, and Amazon Route 53 Application Recovery Controller, can help you quickly respond to and recover from disruptions. AWS also offers resources like the Fault Isolation Boundaries whitepaper, which details how AWS uses boundaries to create zonal, Regional, and global services, and includes prescriptive guidance on how to consider dependencies on different services and improve the resilience of customer workloads.
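As a sketch of kicking off a fault-injection test with the AWS SDK for Python (boto3); the experiment template ID is hypothetical and would be created beforehand in AWS Fault Injection Service, for example a template that stops instances in one Availability Zone.

```python
import boto3

fis = boto3.client("fis", region_name="eu-west-3")

# Start an experiment from a pre-created template (ID is hypothetical).
experiment = fis.start_experiment(
    experimentTemplateId="EXT123456789abcdef"
)["experiment"]

# Poll the experiment's state.
status = fis.get_experiment(id=experiment["id"])["experiment"]["state"]["status"]
print(status)  # e.g. "initiating", "running", or "completed"
```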
AWS offers multiple ways for you to achieve your resilience goals, including assistance from AWS Partners and AWS Professional Services. AWS Resilience Competency Partners specialize in improving the availability and resilience of customers' critical workloads in the cloud. AWS Professional Services offers Resilience Architecture Readiness Assessments, which assess customer capabilities in eight critical domains (change management, disaster recovery, durability, observability, operations, redundancy, scalability, and testing) to identify gaps and areas for improvement.
Transparency and Assurances
AWS regularly undergoes independent third-party attestation audits to provide assurance that control activities are operating as intended. You inherit the latest security controls operated by AWS, strengthening your own compliance and certification programs. AWS supports over 140 security standards and compliance certifications, including ISO standards, the French HDS certification, the German C5 attestation, and the PCI DSS and SOC standards. AWS applies the same security model to all customers, whether they have advanced data protection needs or more standard ones. As a result, our customers benefit from the most demanding set of security controls, such as those required for personal, banking, and health data, even if they do not process such sensitive data and regardless of the scale or criticality of their applications. We demonstrate our compliance posture to help you verify compliance with industry and government requirements, and we provide compliance certificates, reports, and other documentation directly to you via our self-service portal, AWS Artifact.
Yes. AWS is committed to enabling customers to use all AWS services in compliance with the EU’s data protection regulations, including the General Data Protection Regulation (GDPR). AWS customers can use all AWS services to process personal data (as defined in the GDPR) that is uploaded to the AWS services under their AWS accounts (customer data) in compliance with the GDPR. In addition to our own compliance, AWS is committed to offering services and resources to our customers to help them comply with the GDPR requirements that may apply to their activities. For more information, visit the GDPR Center.
Our Data Processing Addendum (AWS DPA), including the Standard Contractual Clauses (SCCs), applies automatically to customers who are subject to the GDPR. The AWS Service Terms include the SCCs adopted by the European Commission (EC) in June 2021, and the AWS DPA confirms that these SCCs apply automatically whenever a customer uses AWS services to transfer customer data to countries outside of the European Economic Area that have not received an adequacy decision from the EC (third countries). For more information, see the blog post on the implementation of the new Standard Contractual Clauses. You can also read our guide, Navigating Compliance with EU Data Transfer Requirements.
Yes. AWS has declared more than 100 services under the Data Protection Code of Conduct for Cloud Infrastructure Service Providers in Europe (CISPE Code), which provides an independent verification and an added level of assurance to our customers that our cloud services can be used in compliance with the General Data Protection Regulation (GDPR). Validated by the European Data Protection Board (EDPB), acting on behalf of the 27 data protection authorities across Europe, and formally adopted by the French Data Protection Authority (CNIL), acting as the lead supervisory authority, the CISPE Code assures organizations that their cloud infrastructure service provider meets the requirements applicable to personal data processed on their behalf (customer data) under the GDPR. The CISPE Code also raises the bar on data protection and privacy for cloud services in Europe, going beyond current GDPR requirements. The CISPE Code helps customers ensure that their cloud infrastructure service provider offers appropriate operational assurances to demonstrate compliance with GDPR and protect customer data. The CISPE catalog (available online) lists the AWS services that have been independently verified as complying with the CISPE Code. The verification process was conducted by Ernst & Young CertifyPoint (EY CertifyPoint), an independent, globally recognized monitoring body accredited by CNIL. Visit the FAQs in our GDPR Center for more information on AWS compliance with the CISPE Code.
We are committed to earning customers’ trust with verifiable control over customer content access and increased transparency. We engaged NCC Group, a leading cybersecurity consulting firm, to conduct an architecture review of the security claims of the AWS Nitro System and produce a public report. The report confirms that the AWS Nitro System, by design, has no mechanism for anyone at AWS to access your content on Nitro hosts. Learn more by reading our blog post.
Cryptographic Algorithms in AWS
Cryptography is an essential part of security for both AWS and our customers. AWS services already support encryption for data in transit, at rest, or in memory, with most also supporting encryption with customer managed keys that are inaccessible to AWS. With the AWS Digital Sovereignty Pledge, we committed to continuing to innovate and invest in additional controls for sovereignty and encryption features so that our customers can encrypt everything everywhere.
AWS is also committed to using the most secure cryptographic algorithms available to meet the security and performance requirements of our customers. We default to high-assurance algorithms and implementations, and we prefer hardware-optimized solutions, which offer greater speed, better security, and lower power consumption. The AWS Crypto Library (AWS-LC) represents our commitment to delivering optimized, high-assurance, formally verified, constant-time cryptographic algorithms. Where applicable, we follow the Shared Responsibility Model and offer customers the ability to customize their use of cryptography to meet individual security, compliance, and performance requirements while still meeting industry-accepted security levels. For example, Elastic Load Balancing offers Application Load Balancers that provide various security policies for the Transport Layer Security (TLS) protocol.
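For example, pinning an Application Load Balancer's HTTPS listener to one of the predefined TLS security policies with the AWS SDK for Python (boto3); the listener ARN is hypothetical, and the policy shown permits only TLS 1.2 and TLS 1.3.

```python
import boto3

elbv2 = boto3.client("elbv2", region_name="eu-west-3")

# Apply a predefined security policy to an existing HTTPS listener
# (the ARN is hypothetical).
elbv2.modify_listener(
    ListenerArn=(
        "arn:aws:elasticloadbalancing:eu-west-3:123456789012:"
        "listener/app/example-alb/0123456789abcdef/0123456789abcdef"
    ),
    SslPolicy="ELBSecurityPolicy-TLS13-1-2-2021-06",
)
```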
Interoperability and trust in the cryptographic algorithms used to protect data are important. AWS services use cryptographic algorithms that meet industry standards and foster interoperability. These standards are widely accepted by governments, industry, and academia. It takes considerable analysis by the global community, and time for availability within the industry, before an algorithm becomes widely accepted; a lack of such analysis and availability introduces interoperability challenges, complexity, and risk for deployments. AWS will continue to deploy new cryptographic options as necessary to meet our high security bar and performance requirements.
Below we summarize the cryptographic algorithms, ciphers, modes, and key sizes that AWS is deploying across its services to protect customer data. This is not an exhaustive list of all cryptography used in AWS.
The algorithms fall into two categories: Preferred algorithms meet the bar set by the criteria above, while Acceptable algorithms are trusted algorithms that can be used for compatibility in certain applications but are not Preferred. Customers can take this information into consideration when making their own cryptographic choices for their encryption use cases.
| Algorithm | Status |
| --- | --- |
| Asymmetric Encryption | |
| RSA-OAEP with 2048- or 3072-bit modulus | Acceptable |
| HPKE with P-256 or P-384, HKDF, and AES-GCM | Acceptable |
| Asymmetric Key Agreement | |
| ECDH(E) with P-384 | Preferred |
| ECDH(E) with P-256, P-521, or X25519 | Acceptable |
| ECDH(E) with Brainpool curves | Acceptable |
| Block Ciphers and Modes | |
| AES-GCM-256 | Preferred |
| AES-XTS-256 | Acceptable |
| AES-GCM-128 | Acceptable |
| ChaCha20/Poly1305 | Acceptable |
| CBC / CTR / CCM modes (with AES-128 or AES-256) | Acceptable |
| Hashing | |
| SHA2-384 | Preferred |
| SHA2-256 | Acceptable |
| SHA3 | Acceptable |
| Key Derivation | |
| HKDF_Expand with SHA2-256 | Preferred |
| Counter Mode KDF with HMAC-SHA2-256 | Acceptable |
| HKDF with SHA2-256 | Acceptable |
| Key Wrapping | |
| AES-KW or AES-KWP with 256-bit keys | Acceptable |
| AES-GCM-256 | Acceptable |
| Message Authentication Code (MAC) | |
| HMAC-SHA2-384 | Preferred |
| HMAC-SHA2-256 | Acceptable |
| KMAC | Acceptable |
| Password Hashing | |
| scrypt with SHA384 | Preferred |
| PBKDF2 | Acceptable |
| Post-Quantum Algorithms | |
| ML-KEM-768 combined with ECDH in PQ-hybrid key exchanges | Preferred |
| SLH-DSA | Preferred (for software/firmware signing) |
| Signatures | |
| ECDSA with P-384 | Preferred |
| ECDSA with P-256, P-521, or Ed25519 | Acceptable |
| RSA-2048 or RSA-3072 | Acceptable |
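For illustration, here is what the Preferred symmetric choice, AES-GCM with a 256-bit key, looks like in practice. This is a minimal sketch using the widely available Python cryptography package; in a real application the key would come from a KDF or a key management service such as AWS KMS rather than being generated ad hoc.

```python
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=256)  # 256-bit key (AES-GCM-256)
nonce = os.urandom(12)                     # 96-bit nonce; never reuse with the same key

aesgcm = AESGCM(key)
ciphertext = aesgcm.encrypt(nonce, b"customer data", b"optional associated data")
plaintext = aesgcm.decrypt(nonce, ciphertext, b"optional associated data")
assert plaintext == b"customer data"
```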
AWS tracks cryptographic developments, security issues, and research results closely. We remove deprecated algorithms and address security issues as they are discovered. Examples include mitigating the Logjam attack and removing early, experimental deployments of the SIKE algorithm in post-quantum hybrid mode after the algorithm was broken (which did not pose a security risk, because the hybrid construction still protected data with classical cryptography). Other examples include common security problems related to side channels in the padding of CBC mode implementations. AWS remains committed to identifying compatibility issues with legacy clients that use low-security algorithms and to working with our customers to help them migrate to secure options.
AWS also remains engaged in emerging cryptographic areas, including post-quantum cryptography and computing on encrypted data, and we are preparing to deploy them in our use cases to protect customer data.
Resources
For more information on digital sovereignty in Europe, visit our resources.