AWS for Industries

FSI Services Spotlight: Featuring Amazon Aurora

In this edition of the Financial Services Industry (FSI) Services Spotlight monthly blog series, we highlight five key considerations for customers running workloads on Amazon Aurora: achieving compliance, data protection, isolation of compute environments, audits with APIs, and access control/security. Across each area, we will examine specific guidance, suggested reference architectures, and technical code to help streamline service approval of Amazon Aurora.

Amazon Aurora (Aurora) is a fully managed relational database engine that’s compatible with MySQL and PostgreSQL. Aurora MySQL and PostgreSQL combine the speed and reliability of high-end commercial databases with the simplicity and cost-effectiveness of open-source databases. The code, tools, and applications you use today with your existing MySQL and PostgreSQL databases can be used with Aurora. With some workloads, Aurora can deliver up to five times the throughput of MySQL and up to three times the throughput of PostgreSQL without requiring changes to most of your existing applications.

Customers choose Aurora to improve availability, performance, scalability, and resiliency while lowering monthly costs. Aurora is highly redundant, replicating six copies of customer data across three Availability Zones. This design allows Aurora to offer 99.99% availability when customers have at least one reader provisioned in a different Availability Zone. For instance, Experian uses Aurora to achieve increased uptime, improve response times by 10%, and reduce backup runtime by 95%. Chime grew their infrastructure rapidly by migrating to Aurora; they completed the migration with no downtime and were able to double the size of their database. Many other customers, including Dow Jones, Capital One, and Intuit, utilize Aurora to remove the undifferentiated heavy lifting of managing relational database resources. This capability frees them to focus on delighting their users.

Achieving compliance with Amazon Aurora

Security and compliance are a shared responsibility between AWS and the customer. AWS operates, manages, and protects the infrastructure that runs the AWS services. The customer's responsibility is determined by the service selected; the more managed the service, the less customer configuration is required. Because Amazon Aurora is a managed service, customers are responsible for fewer controls when deploying secure transactional workloads on the MySQL- and PostgreSQL-compatible database. On the customer's side of the shared responsibility model, customers should first determine their requirements for network connectivity, encryption, and access to other AWS resources. We will dive deeper into those topics in the upcoming sections.

On AWS's side of the shared responsibility model, Aurora is in scope for the following compliance programs:

  • SOC 1, 2, 3
  • PCI
  • IRAP Protected
  • ISO/IEC 27001:2013, 27017:2015, 27018:2019, and ISO/IEC 9001:2015
  • OSPAR
  • C5
  • MTCS

In the following sections, we will cover topics on the customer side of the shared responsibility model.

Data protection with Amazon Aurora

Compliance regulations such as PCI DSS require encrypting data at rest throughout the data lifecycle. There are two aspects of encryption at rest with Aurora. The first is encrypting the database storage underlying the cluster's DB instances. Server-side encryption of database storage in Amazon Aurora utilizes the industry-standard AES-256 encryption algorithm to encrypt your data at rest. You can use an AWS managed Customer Master Key (CMK), or you can create customer managed CMKs. To manage the CMKs used for encrypting and decrypting your Amazon Aurora resources, you use the AWS Key Management Service (AWS KMS). After your data is encrypted, Amazon Aurora handles authentication of access and decryption of your data transparently. You can select 'Enable encryption' in the console, or enable it via the CLI or API. For more information, see Encrypting Amazon Aurora resources.
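
As a minimal sketch (the cluster identifier, credentials, and KMS key alias are hypothetical), the following boto3 call creates an Aurora MySQL cluster with storage encryption enabled under a customer managed CMK:

import boto3

rds = boto3.client("rds", region_name="us-east-1")

# Create an Aurora MySQL cluster encrypted at rest with a customer managed CMK.
rds.create_db_cluster(
    DBClusterIdentifier="fsi-aurora-cluster",     # hypothetical identifier
    Engine="aurora-mysql",
    MasterUsername="admin_user",
    MasterUserPassword="REPLACE_WITH_SECRET",     # retrieve from AWS Secrets Manager in practice
    StorageEncrypted=True,
    KmsKeyId="alias/aurora-data-key",             # hypothetical customer managed CMK alias
)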

The second aspect is encrypting backups and snapshots. Aurora automated backups are always enabled and are retained for up to 35 days. Snapshots are used when you want to keep backups longer than 35 days, typically for long-term retention. We recommend that customers enable encryption at rest for their snapshots. Once you've enabled encryption on your Aurora cluster, snapshots taken from that storage volume are automatically encrypted using the same AWS Key Management Service (AWS KMS) key used at the cluster level. Customers can also copy snapshots between Regions and/or accounts to further limit the blast radius in the case of account exposure.
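
As an illustrative sketch (the snapshot names and key alias are hypothetical), the following copies an encrypted cluster snapshot from us-east-1 into us-west-2, re-encrypting it with a KMS key that lives in the destination Region:

import boto3

# Issue the copy from the destination Region; boto3 generates the
# pre-signed URL for the cross-Region source automatically.
rds_west = boto3.client("rds", region_name="us-west-2")

rds_west.copy_db_cluster_snapshot(
    SourceDBClusterSnapshotIdentifier="arn:aws:rds:us-east-1:123456789012:cluster-snapshot:fsi-aurora-snap",  # hypothetical
    TargetDBClusterSnapshotIdentifier="fsi-aurora-snap-dr-copy",
    KmsKeyId="alias/aurora-dr-key",   # hypothetical KMS key in us-west-2
    SourceRegion="us-east-1",
)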

Encryption in transit can be accomplished in Amazon Aurora using Transport Layer Security (TLS) from your application to an Aurora cluster. This ensures that data between your clients and the Aurora database cluster is encrypted over the network. You can configure encryption in transit using the provided certificate bundles and following the respective instructions for Aurora MySQL and Aurora PostgreSQL. We recommend that customers set the minimum TLS version to 1.2 or higher. This can be done in both Aurora MySQL and Aurora PostgreSQL.
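
As an example (the custom cluster parameter group name is hypothetical), an Aurora MySQL cluster can be configured to reject unencrypted connections and require TLS 1.2 by adjusting cluster parameters with boto3; Aurora PostgreSQL exposes equivalent parameters such as rds.force_ssl:

import boto3

rds = boto3.client("rds", region_name="us-east-1")

# Require TLS for all connections and pin the minimum protocol version to TLS 1.2.
rds.modify_db_cluster_parameter_group(
    DBClusterParameterGroupName="fsi-aurora-mysql-params",   # hypothetical parameter group
    Parameters=[
        {
            "ParameterName": "require_secure_transport",
            "ParameterValue": "ON",
            "ApplyMethod": "immediate",
        },
        {
            "ParameterName": "tls_version",
            "ParameterValue": "TLSv1.2",
            "ApplyMethod": "pending-reboot",
        },
    ],
)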

Isolation of compute environments with Amazon Aurora

Each database server has a VM-enforced isolation boundary and does not share the underlying kernel, CPU resources, memory resources, or elastic network interface with another server.

Customers can apply network-level controls, such as security groups and network ACLs, to their Aurora clusters. Amazon Aurora does this by creating an elastic network interface (ENI) in the customer-specified VPC and attaching it to the managed instance. This gives customers control over network-level access to their Aurora databases. For instance, we frequently see customers define a security group that encompasses their web applications (e.g., sg-0171f59f7a4b4f04c) and then reference that group as a source in the inbound rules of their Aurora security group (see Figure 2). This approach simplifies rule management and firewalls database access at the network level. Customers that choose to restrict their security group rules to a specific set of Availability Zones are responsible for adjusting those rules if the cluster scales out to an additional Availability Zone. For more information, see Security groups for your VPC and Network ACLs.
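
As a hedged sketch (the Aurora security group ID is hypothetical; sg-0171f59f7a4b4f04c is the web application group from the scenario above), the following grants the application tier inbound access to the Aurora security group on the MySQL port only:

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Allow the application tier's security group to reach Aurora on port 3306.
ec2.authorize_security_group_ingress(
    GroupId="sg-0123456789abcdef0",   # hypothetical Aurora security group
    IpPermissions=[
        {
            "IpProtocol": "tcp",
            "FromPort": 3306,
            "ToPort": 3306,
            "UserIdGroupPairs": [
                {
                    "GroupId": "sg-0171f59f7a4b4f04c",   # web application security group
                    "Description": "App tier to Aurora MySQL",
                }
            ],
        }
    ],
)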


Figure 2: Aurora Security Group

Aurora is part of the Amazon Relational Database Service (Amazon RDS) family. AWS PrivateLink enables you to privately access Amazon RDS API operations without an internet gateway, NAT device, VPN connection, or AWS Direct Connect connection. Instances in your VPC don't need public IP addresses to communicate with Amazon RDS API endpoints to launch, modify, or terminate DB clusters. Your instances also don't need public IP addresses to use any of the available RDS API operations. Traffic between your VPC and Amazon RDS doesn't leave the Amazon network.
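
A minimal sketch of creating such an interface endpoint for the RDS API with boto3 (the VPC, subnet, and security group IDs are hypothetical) might look like the following:

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Interface VPC endpoint so RDS API calls stay on the Amazon network.
ec2.create_vpc_endpoint(
    VpcEndpointType="Interface",
    VpcId="vpc-0abc1234def567890",                      # hypothetical VPC
    ServiceName="com.amazonaws.us-east-1.rds",          # RDS API endpoint service
    SubnetIds=["subnet-0aaa1111", "subnet-0bbb2222"],   # hypothetical subnets
    SecurityGroupIds=["sg-0ccc3333"],                   # hypothetical endpoint security group
    PrivateDnsEnabled=True,
)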

Additionally, customers can improve their security posture by attaching a least-privilege endpoint policy to the VPC endpoint that controls access to the Amazon RDS APIs. These features enable customers to restrict API calls to Aurora to only specific caller contexts (e.g., IP-range filtering). For more information, see the complete list of details and considerations when using VPC endpoints with RDS. The following example policy denies a specific account the ability to call the Amazon RDS APIs through the endpoint.

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Id": "DenySpecificAccount",
      "Action": "*",
      "Effect": "Deny",
      "Resource": "*",
      "Principal": {
        "AWS": [
          "123456789012"
        ]
      }
    }
  ]
}
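
To apply a policy like the one above, you can set it as the endpoint's policy document, for example with boto3 (the endpoint ID is hypothetical):

import json

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# The same deny statement shown above, expressed as a Python dictionary.
endpoint_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Id": "DenySpecificAccount",
            "Action": "*",
            "Effect": "Deny",
            "Resource": "*",
            "Principal": {"AWS": ["123456789012"]},
        }
    ],
}

# Attach the policy to the RDS interface endpoint.
ec2.modify_vpc_endpoint(
    VpcEndpointId="vpce-0f1e2d3c4b5a69788",   # hypothetical endpoint ID
    PolicyDocument=json.dumps(endpoint_policy),
)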

Automating audits with APIs with Amazon Aurora

Customers need services and capabilities to assess their Aurora resources' compliance status. Implementing AWS Config rules helps ensure compliance with specific configurations. AWS Config monitors the configuration of resources and provides some out-of-the-box rules to alert when resources fall into a non-compliant state. Customers can enable AWS Config in their account using the AWS Config console or the AWS Command Line Interface (AWS CLI). They can select the Amazon RDS resource types for which they want to track configuration changes, such as AWS::RDS::DBCluster, AWS::RDS::DBInstance, and AWS::RDS::DBSnapshot. AWS Config allows for both managed rules and custom rules, enabling customers to build complex audits given their specific business needs. Some examples of audits on RDS with AWS Config managed rules include:

  • rds-snapshot-encrypted – Checks whether Amazon Relational Database Service (Amazon RDS) DB snapshots are encrypted. The rule is NON_COMPLIANT if the Amazon RDS DB snapshots are not encrypted.
  • rds-storage-encrypted – Checks whether storage encryption is enabled for your RDS DB instances.
  • rds-instance-public-access-check – Checks whether Amazon RDS instances are not publicly accessible. The rule is NON_COMPLIANT if the publiclyAccessible field is true in the instance configuration item.
  • rds-snapshots-public-prohibited – Checks whether Amazon RDS snapshots are public. The rule is NON_COMPLIANT if any existing or new Amazon RDS snapshots are public.

You can view more details on these managed Config rules in the AWS Config documentation.
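
As a brief sketch, the rds-storage-encrypted managed rule can be enabled programmatically with boto3 (the ConfigRuleName shown is an assumption; any name can be used):

import boto3

config = boto3.client("config", region_name="us-east-1")

# Enable the AWS managed rule that flags unencrypted RDS/Aurora storage.
config.put_config_rule(
    ConfigRule={
        "ConfigRuleName": "fsi-rds-storage-encrypted",    # hypothetical rule name
        "Source": {
            "Owner": "AWS",
            "SourceIdentifier": "RDS_STORAGE_ENCRYPTED",  # managed rule identifier
        },
        "Scope": {
            "ComplianceResourceTypes": ["AWS::RDS::DBInstance"],
        },
    }
)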

Besides managed rules in Config, customers can build custom Config rules using API calls related to RDS recorded by AWS CloudTrail. AWS CloudTrail is an AWS service that helps customers enable governance, compliance, and operational and risk auditing of their AWS account. CloudTrail provides an aggregated repository of AWS API calls and changes to resources for over 160 AWS services. AWS CloudTrail records the API calls made to the RDS service, such as CreateDBCluster, ModifyDBCluster, DeleteDBCluster, CreateDBInstance, ModifyDBInstance, and DeleteDBInstance.

Monitoring these APIs in CloudTrail helps ensure that only appropriate actions are taking place against your Aurora databases. For a complete list of RDS APIs, review the Amazon RDS API Reference.
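
For instance, the following boto3 snippet (the 24-hour window is arbitrary) looks up recent CreateDBInstance calls recorded by CloudTrail so they can be reviewed against your change-approval process:

from datetime import datetime, timedelta

import boto3

cloudtrail = boto3.client("cloudtrail", region_name="us-east-1")

# Review CreateDBInstance calls from the last 24 hours.
response = cloudtrail.lookup_events(
    LookupAttributes=[
        {"AttributeKey": "EventName", "AttributeValue": "CreateDBInstance"}
    ],
    StartTime=datetime.utcnow() - timedelta(days=1),
    EndTime=datetime.utcnow(),
)

for event in response["Events"]:
    print(event["EventTime"], event.get("Username", "unknown"), event["EventName"])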

Following is an example of what a CloudTrail log looks like for the CreateDBInstance API:

{
  "eventVersion": "1.04",
  "userIdentity": {
    "type": "IAMUser",
    "principalId": "AKIAIOSFODNN7EXAMPLE",
    "arn": "arn:aws:iam::123456789012:user/johndoe",
    "accountId": "123456789012",
    "accessKeyId": "AKIAI44QH8DHBEXAMPLE",
    "userName": "johndoe"
  },
  "eventTime": "2018-07-30T22:14:06Z",
  "eventSource": "rds.amazonaws.com",
  "eventName": "CreateDBInstance",
  "awsRegion": "us-east-1",
  "sourceIPAddress": "192.0.2.0",
  "userAgent": "aws-cli/1.15.42 Python/3.6.1 Darwin/17.7.0 botocore/1.10.42",
  "requestParameters": {
    "enableCloudwatchLogsExports": [
      "audit",
      "error",
      "general",
      "slowquery"
    ],
    "dBInstanceIdentifier": "test-instance",
    "engine": "mysql",
    "masterUsername": "myawsuser",
    "allocatedStorage": 20,
    "dBInstanceClass": "db.m1.small",
    "masterUserPassword": "*"
  },
  "responseElements": {
    "dBInstanceArn": "arn:aws:rds:us-east-1:123456789012:db:test-instance",
    "storageEncrypted": false,
    "preferredBackupWindow": "10:27-10:57",
    "preferredMaintenanceWindow": "sat:05:47-sat:06:17",
    "backupRetentionPeriod": 1,
    "allocatedStorage": 20,
    "storageType": "standard",
    "engineVersion": "5.6.39",
    "dbInstancePort": 0,
    "optionGroupMemberships": [
      {
        "status": "in-sync",
        "optionGroupName": "default:mysql-5-6"
      }
    ],
    "dBParameterGroups": [
      {
        "dBParameterGroupName": "default.mysql5.6",
        "parameterApplyStatus": "in-sync"
      }
    ],
    "monitoringInterval": 0,
    "dBInstanceClass": "db.m1.small",
    "readReplicaDBInstanceIdentifiers": [],
    "dBSubnetGroup": {
      "dBSubnetGroupName": "default",
      "dBSubnetGroupDescription": "default",
      "subnets": [
        {
          "subnetAvailabilityZone": {
            "name": "us-east-1b"
          },
          "subnetIdentifier": "subnet-cbfff283",
          "subnetStatus": "Active"
        },
        {
          "subnetAvailabilityZone": {
            "name": "us-east-1e"
          },
          "subnetIdentifier": "subnet-d7c825e8",
          "subnetStatus": "Active"
        }
      ],
      "vpcId": "vpc-136a4c6a",
      "subnetGroupStatus": "Complete"
    },
    "masterUsername": "myawsuser",
    "multiAZ": false,
    "autoMinorVersionUpgrade": true,
    "engine": "mysql",
    "cACertificateIdentifier": "rds-ca-2015",
    "dbiResourceId": "db-ETDZIIXHEWY5N7GXVC4SH7H5IA",
    "dBSecurityGroups": [],
    "pendingModifiedValues": {
      "masterUserPassword": "*",
      "pendingCloudwatchLogsExports": {
        "logTypesToEnable": [
          "audit",
          "error",
          "general",
          "slowquery"
        ]
      }
    },
    "dBInstanceStatus": "creating",
    "publiclyAccessible": true,
    "domainMemberships": [],
    "copyTagsToSnapshot": false,
    "dBInstanceIdentifier": "test-instance",
    "licenseModel": "general-public-license",
    "iAMDatabaseAuthenticationEnabled": false,
    "performanceInsightsEnabled": false,
    "vpcSecurityGroups": [
      {
        "status": "active",
        "vpcSecurityGroupId": "sg-f839b688"
      }
    ]
  },
  "requestID": "daf2e3f5-96a3-4df7-a026-863f96db793e",
  "eventID": "797163d3-5726-441d-80a7-6eeb7464acd4",
  "eventType": "AwsApiCall",
  "recipientAccountId": "123456789012"
}

FSI customers can use AWS Audit Manager to continuously audit their AWS usage and simplify how they assess risk and compliance with regulations and industry standards. AWS Audit Manager automates evidence collection and organizes the evidence as defined by the control set in the selected framework, such as PCI DSS, SOC 2, and GDPR. Audit Manager collects data from sources including AWS CloudTrail to compare the environment's configurations against the compliance controls. Because all Aurora calls are logged in CloudTrail, Audit Manager's integration with CloudTrail is advantageous when you need to demonstrate that controls have been met. Consider the encryption requirement in SOC 2, for example. Rather than querying across all CloudTrail logs to confirm that Aurora clusters are encrypted, customers can centrally see whether the requirement is being met in Audit Manager. Audit Manager saves time with automated collection of evidence and provides audit-ready reports for customers to review. The Audit Manager assessment report uses cryptographic verification to help you ensure the integrity of the assessment report. In Audit Manager, you configure a custom control with a data source mapped to the Amazon Aurora API action of interest.

Following is an example of evidence in Audit Manager from a custom Aurora control. The status field shows "creating", which indicates that a new Aurora cluster has been created, as defined by the custom control.

{
  "dBClusterIdentifier": "aurora-cluster-2",
  "status": "creating",
  "engine": "aurora-mysql",
  "engineVersion": "5.7.mysql_aurora.2.10.2",
  "engineMode": "provisioned",
  "masterUsername": "awsuser",
  "databaseName": "dev",
  "port": 3306,
  "multiAZ": false,
  "storageEncrypted": false,
  "iAMDatabaseAuthenticationEnabled": false,
  "deletionProtection": false,
  "backupRetentionPeriod": 1,
  "preferredBackupWindow": "10:27-10:57",
  "preferredMaintenanceWindow": "sun:00:00-sun:00:30",
  "vpcSecurityGroups": [
    {
      "vpcSecurityGroupId": "sg-584c4d10",
      "status": "active"
    }
  ],
  "dBSubnetGroup": "default",
  "availabilityZones": [
    "us-east-1a",
    "us-east-1b",
    "us-east-1c"
  ],
  "dBClusterMembers": [],
  "associatedRoles": [],
  "pendingModifiedValues": {
    "masterUserPassword": "****"
  },
  "copyTagsToSnapshot": false
}

Operational access and security with Amazon Aurora

In the previous section, we discussed detection methods; however, it is also important to use preventative controls so that unauthorized API calls fail. When securing your Aurora databases, consider three personas as you create least-privilege AWS Identity and Access Management (IAM) roles:

  • Development users – These are the developers that use Aurora on a day-to-day basis in order to build their applications.
  • Service administrators – This is typically a team or individual within an organization that is in charge of Aurora resources and determines developers’ permissions to Aurora.
  • Application resources – The application resources that can read from and write to Aurora clusters within an AWS environment. Typical examples are EC2 instances, ECS tasks, EKS pods, and Lambda functions.

Service administrators are the individuals or teams responsible for securing and creating Aurora clusters within AWS environments. Typically they will create the IAM permissions for service users to ensure the downstream users are following the principle of least privilege.

Service users are the individuals (developers, database administrators, etc.) who access and modify the Aurora cluster on a day-to-day basis to build their applications. Their IAM policies are created and scoped by the service administrator based on their job role and access needs. Examples of these policies include read-only console access, permission to create DB instances within a specific AWS account, and prevention of deleting a DB instance. There are also AWS managed policies that customers can use for basic separation of duties; however, we recommend using the managed policies as a baseline and modifying them to create custom policies based on your business needs.

The following AWS managed policies, which you can attach to users in your account, are specific to Amazon RDS:

  • AmazonRDSReadOnlyAccess – Grants read-only access to all Amazon RDS resources for the AWS account specified.
  • AmazonRDSFullAccess – Grants full access to all Amazon RDS resources for the AWS account specified.
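
Beyond these managed policies, customers can create custom policies programmatically. The following is a minimal sketch using boto3 (the policy name and statements are hypothetical, not a recommendation): it allows read-only RDS actions while explicitly denying deletion of DB instances and clusters.

import json

import boto3

iam = boto3.client("iam")

# Hypothetical custom policy: read-only RDS access, with destructive actions denied.
policy_document = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowReadOnlyAurora",
            "Effect": "Allow",
            "Action": ["rds:Describe*", "rds:ListTagsForResource"],
            "Resource": "*",
        },
        {
            "Sid": "DenyDeletes",
            "Effect": "Deny",
            "Action": ["rds:DeleteDBInstance", "rds:DeleteDBCluster"],
            "Resource": "*",
        },
    ],
}

response = iam.create_policy(
    PolicyName="AuroraDeveloperReadOnly",   # hypothetical policy name
    PolicyDocument=json.dumps(policy_document),
)
print(response["Policy"]["Arn"])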

Application resources refers to the database connections initiated by resources either within AWS or on-premises. For these scenarios we want to ensure that only authorized applications are accessing the Aurora cluster. There are three methods for doing this securely. First, you can utilize a standard password that your application pulls programmatically from AWS Secrets Manager. This ensures passwords are not stored in plaintext within the codebase and can be rotated programmatically via Secrets Manager. Second, you can utilize IAM database authentication where you don’t need to use a password when you connect to a DB cluster. Instead, you use an authentication token via IAM. This method is best for temporary (15 minutes or less), personal access to the database. Third, you can utilize external authentication of database users using Kerberos and Microsoft Active Directory. Customers use any one of these three (and combinations of them) to meet their business requirements while using Aurora.
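
As an illustrative sketch of the first two methods (the secret name, cluster endpoint, and database user are hypothetical), the following boto3 snippet retrieves a database credential from AWS Secrets Manager and then generates a short-lived IAM authentication token:

import json

import boto3

# Method 1: retrieve a database credential stored in AWS Secrets Manager.
secrets = boto3.client("secretsmanager", region_name="us-east-1")
secret = secrets.get_secret_value(SecretId="prod/aurora/app-user")   # hypothetical secret name
credentials = json.loads(secret["SecretString"])                     # e.g., {"username": ..., "password": ...}

# Method 2: generate a short-lived (15 minute) IAM database authentication token.
rds = boto3.client("rds", region_name="us-east-1")
token = rds.generate_db_auth_token(
    DBHostname="fsi-aurora-cluster.cluster-abc123xyz.us-east-1.rds.amazonaws.com",   # hypothetical endpoint
    Port=3306,
    DBUsername="app_user",
)
# Pass `credentials` or `token` as the password to your MySQL/PostgreSQL client over TLS.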

Access control for Aurora resources doesn't stop with AWS constructs. Customers should also ensure they have created least-privilege developer access within their database engine of choice. For Aurora PostgreSQL, commands such as CREATE ROLE, ALTER ROLE, GRANT, and REVOKE work just as they do in on-premises databases, as does directly modifying database schema tables. For Aurora MySQL, commands such as CREATE USER, RENAME USER, GRANT, REVOKE, and SET PASSWORD should be used to limit access within the engine.

Furthermore, customers should consider utilizing service control policies (SCPs) within AWS Organizations. SCPs offer central control over the maximum available permissions for all accounts in the customer's organization. Unlike IAM policies, SCPs act as guardrails, allowing customers to set the maximum privilege within an account or set of accounts regardless of the IAM roles created within them. An example might be limiting the creation of Aurora databases to only the specific Region(s) in which the customer operates. Another use case would be requiring multi-factor authentication (MFA), through an IAM condition key, for specific administrative tasks. The following SCP shows both of these examples in action.

{
  "Version": "2012-10-17",
  "Id": "Aurora-Scp-Example",
  "Statement": [
    {
      "Sid": "DenyNonApprovedRegions",
      "Effect": "Deny",
      "Principal": "*",
      "Action": [
        "rds:CreateInstance",
        "rds:CreateDBInstance"
      ],
      "Resource": "*",
      "Condition": {
        "StringNotEquals": {
          "aws:RequestedRegion": [
            "us-east-1",
            "us-west-2"
          ]
        }
      }
    },
    {
      "Sid": "DenyStopAndTerminateWhenMFAIsNotPresent",
      "Effect": "Deny",
      "Principal": "*",
      "Action": [
        "rds:StopInstance",
        "rds:StopDBCluster",
        "rds:DeleteInstance",
        "rds:DeleteDBCluster"
      ],
      "Resource": "*",
      "Condition": {
        "BoolIfExists": {
          "aws:MultiFactorAuthPresent": false
        }
      }
    }
  ]
}

Conclusion

In this post, we reviewed Amazon Aurora, highlighting essential information that can help FSI customers accelerate the service's approval within these five categories: achieving compliance, data protection, isolation of compute environments, automating audits with APIs, and operational access and security. While not a one-size-fits-all approach, the guidance can be adapted to meet your organization's security and compliance requirements. We also provided a consolidated list of crucial considerations for securing Aurora databases.

Be sure to visit our AWS Industries blog channel and stay tuned for more financial services news and best practices.

Anthony Pasquariello

Anthony is a Senior Solutions Architect at AWS based in New York City. He specializes in modernization and security for our advanced enterprise customers. Anthony enjoys writing and speaking about all things cloud. He’s pursuing an MBA, and received his MS and BS in Electrical & Computer Engineering.

Nate Bachmeier

Nate is a Sr. Solutions Architect at AWS who nomadically explores New York City one cloud integration at a time. He works with enterprise customers, helping them migrate to the cloud and adopt cutting-edge technologies.