AWS Database Blog

Enable fine-grained access control and observability for API operations in Amazon DynamoDB

Customers choose Amazon DynamoDB to improve their applications’ performance, scalability, and resiliency. DynamoDB’s serverless architecture simplifies operations by abstracting hardware, scaling, patching, and maintenance. Managing data access and security in DynamoDB is different from instance-based database solutions. DynamoDB uses AWS Identity and Access Management (IAM) to authenticate and authorize access to resources, whereas RDBMS solutions rely on firewall rules, password authentication, and database connection management.

In this post, we discuss how to use fine-grained access control (FGAC) to implement least privilege access for trusted IAM entities. We demonstrate how to configure FGAC with condition-based IAM policies in DynamoDB. We provide sample AWS IAM Identity Center permission sets for fine-grained access control to DynamoDB based on attributes like AWS Regions and tags. We also show how to integrate DynamoDB with AWS CloudTrail to log DynamoDB control plane and data plane API actions. For example, you can log DeleteItem events in CloudTrail logs, create an Amazon CloudWatch metric filter, and create a CloudWatch alarm. We also provide a HashiCorp Terraform infrastructure as code (IaC) template that automates the creation of these resources.

Fine-grained access control in DynamoDB

At a high level, the steps to implement fine-grained access control are as follows:

  1. Create an IAM Identity Center permission set to implement least privilege access using conditions and prefixes in IAM policies.
  2. Select the AWS account in IAM Identity Center, assign users and groups, and assign the permission set.

DynamoDB is a fully managed database service that lets you offload the administrative burden of operating and scaling a hosted database. However, to make sure that only authorized users and applications can create and modify data in DynamoDB, you need to implement access control effectively with the help of AWS services. Before diving into the specifics, it’s essential to understand the basics.

IAM is the foundation of AWS security. It allows you to control who can perform actions on your AWS resources and what those actions can be. You can deny or grant permissions to IAM users, groups, or roles.

In addition to IAM, DynamoDB supports resource-based policies. Resource-based policies let you define access permissions by specifying who has access to each resource, and the actions they are allowed to perform on each resource. Resource-based policies also support integrations with IAM Access Analyzer and Block Public Access (BPA) capabilities. Resource-based policies vastly simplify cross-account access control. To explore different DynamoDB authorization scenarios and how you can implement resource-based policies to solve them, refer to Simplify cross-account access control with Amazon DynamoDB using resource-based policies.
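As a hedged illustration of the idea, the following sketch builds a minimal resource-based policy that grants read-only access to a second account. The account IDs, table name, and statement ID are placeholders, and the boto3 call that would attach the policy is shown commented out because it requires credentials:

```python
import json

# Hypothetical table ARN and trusted account ID for illustration.
TABLE_ARN = "arn:aws:dynamodb:us-east-1:111122223333:table/TestTable"
TRUSTED_ACCOUNT = "444455556666"

# A minimal resource-based policy granting cross-account read-only access.
resource_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowCrossAccountRead",
            "Effect": "Allow",
            "Principal": {"AWS": f"arn:aws:iam::{TRUSTED_ACCOUNT}:root"},
            "Action": ["dynamodb:GetItem", "dynamodb:Query"],
            "Resource": TABLE_ARN,
        }
    ],
}

policy_json = json.dumps(resource_policy)

# With credentials configured, the policy could be attached with:
# boto3.client("dynamodb").put_resource_policy(
#     ResourceArn=TABLE_ARN, Policy=policy_json)
```

Because the policy lives on the table itself, the trusted account needs no matching identity-based policy for this grant to take effect in the cross-account scenarios described above.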

Implement fine-grained access control

DynamoDB enables you to implement fine-grained access control down to the level of individual items and attributes within a table. Complete the following steps:

  1. Identify who should have access to your DynamoDB table and what level of access they require. Determine the read and write operations they need to perform.
  2. Create IAM users, groups, or roles for the services and individuals that require access to DynamoDB.
  3. Create IAM policies and attach them to the IAM users, groups, or roles.

The following is a sample policy that denies the UntagResource operation (removing tags) on a specific table called TestTable:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "DenyDeleteTag2",
      "Effect": "Deny",
      "Action": [
        "dynamodb:UntagResource"
      ],
      "Resource": [
        "arn:aws:dynamodb:us-east-1:123456789012:table/TestTable"
      ]
    }
  ]
}

The following policy grants access to all DynamoDB actions on tables whose names begin with TestTable:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowedResourcesWithPrefixApp",
      "Effect": "Allow",
      "Action": [
        "dynamodb:*"
      ],
      "Resource": [
        "arn:aws:dynamodb:us-east-1:123456789012:table/TestTable*"
      ]
    }
  ]
}

DynamoDB supports IAM policy conditions with DynamoDB-specific condition keys that allow you to define fine-grained access control within your tables. You can specify conditions that must be met for an operation to succeed. The following sample policy allows a user to access only specific attributes of items whose partition key value matches their user ID in the table TestTable. This ID, ${www.amazon.com:user_id}, is a substitution variable.

{
   "Version":"2012-10-17",
   "Statement":[
      {
         "Sid":"AllowAccessToOnlyItemsMatchingUserID",
         "Effect":"Allow",
         "Action":[
            "dynamodb:GetItem",
            "dynamodb:BatchGetItem",
            "dynamodb:Query",
            "dynamodb:PutItem",
            "dynamodb:UpdateItem",
            "dynamodb:DeleteItem",
            "dynamodb:BatchWriteItem"
         ],
         "Resource":[
            "arn:aws:dynamodb:us-east-1:123456789012:table/TestTable"
         ],
         "Condition":{
            "ForAllValues:StringEquals":{
               "dynamodb:LeadingKeys":[
                  "${www.amazon.com:user_id}"
               ],
               "dynamodb:Attributes":[
                  "attribute1",
                  "attribute2",
                  "attribute3"
               ]
            },
            "StringEqualsIfExists":{
               "dynamodb:Select":"SPECIFIC_ATTRIBUTES"
            }
         }
      }
   ]
}
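To illustrate the client side, the following sketch shows Query parameters that would satisfy this policy: the key condition pins the partition key to the caller’s user ID, Select is set to SPECIFIC_ATTRIBUTES, and the projection is limited to the allowed attributes. The partition key name UserId is an assumption; use your table’s actual key schema:

```python
# Placeholder for the federated user's substitution-variable value.
user_id = "amzn1.account.EXAMPLE"

# Query parameters consistent with the dynamodb:LeadingKeys,
# dynamodb:Attributes, and dynamodb:Select conditions in the policy.
query_params = {
    "TableName": "TestTable",
    "KeyConditionExpression": "UserId = :uid",  # key name is an assumption
    "ExpressionAttributeValues": {":uid": {"S": user_id}},
    "Select": "SPECIFIC_ATTRIBUTES",
    "ProjectionExpression": "attribute1, attribute2, attribute3",
}

# With credentials configured:
# boto3.client("dynamodb").query(**query_params)
```

A request that queried a different partition key value, omitted Select, or projected other attributes would be denied by the conditions above.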

Monitor DynamoDB control and data plane activity

DynamoDB is integrated with CloudTrail. By creating a trail, you can log DynamoDB control plane API actions as events in CloudTrail logs. To log data plane API actions as well, you need to enable logging of data events in CloudTrail. Data plane events can be filtered by resource type for granular control over which DynamoDB API calls you want to selectively log.

At a high level, the steps are as follows:

  1. Enable CloudTrail to log DynamoDB API operations.
  2. Use CloudWatch metric filters to capture the desired API operation and turn it into a numerical CloudWatch metric.
  3. Create a CloudWatch alarm using the custom metric.
  4. Use Amazon Simple Notification Service (Amazon SNS) to send an email notification to subscribing endpoints or clients when the alarm you created changes its state.

The following diagram illustrates the different components of this architecture.

Monitor DynamoDB control and data plane activity

Prerequisites

For this walk-through, you need the following:

  • An AWS account
  • An IAM user or role with necessary privileges
  • Access to DynamoDB, CloudTrail, CloudWatch, and Amazon SNS

Log DynamoDB operations using CloudTrail

In this solution, you create a data event to log the API actions to a CloudWatch log group. This way, you can quickly determine which DynamoDB items were created, read, updated, or deleted, and identify the source of the API calls. If you detect unauthorized DynamoDB activity, you can also take immediate action to restrict access.

The following screenshot shows how to create a data event with Advanced event selector settings to capture the data plane operation DeleteItem in CloudTrail.

Capture the data plane operation DeleteItem in CloudTrail

The following example shows how to use advanced event selectors using the AWS Command Line Interface (AWS CLI):

aws cloudtrail put-event-selectors --trail-name TrailName \
  --advanced-event-selectors '[
    {
      "Name": "findDeleteItems",
      "FieldSelectors": [
        { "Field": "eventCategory", "Equals": [ "Data" ] },
        { "Field": "resources.type", "Equals": [ "AWS::DynamoDB::Table" ] },
        { "Field": "eventName", "Equals": [ "DeleteItem" ] }
      ]
    }
  ]'

You can test this rule by creating an item in your DynamoDB table and then deleting it. This should produce a DeleteItem API operation in CloudTrail. After the traffic is generated, you can log in to the CloudWatch Logs console, select the log group, and search for the event DeleteItem, as shown in the following screenshot.
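The test traffic can be generated programmatically as well. The following sketch builds the put and delete requests for a throwaway item; the table and key names are illustrative, and the boto3 calls are commented out because they require credentials:

```python
# Hypothetical key for a throwaway test item.
item_key = {"pk": {"S": "test-item-1"}}

put_params = {
    "TableName": "TestTable",
    "Item": {**item_key, "payload": {"S": "hello"}},
}
delete_params = {"TableName": "TestTable", "Key": item_key}

# With credentials configured, this sequence emits a DeleteItem data event:
# ddb = boto3.client("dynamodb")
# ddb.put_item(**put_params)
# ddb.delete_item(**delete_params)
```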

CloudWatch Log

The following are two sample queries that you can run on the CloudWatch Logs console:

  • Find AccessDenied operations: filter (errorCode = "AccessDenied" and eventSource = "dynamodb.amazonaws.com" and eventName = "DescribeGlobalTable")
  • Find unauthorized operations: filter (errorCode = "UnauthorizedOperation" and eventSource = "dynamodb.amazonaws.com" and eventName = "DescribeGlobalTable")
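These queries can also be submitted with CloudWatch Logs Insights through the API. The following sketch builds the start_query parameters for the AccessDenied case over the last hour; the log group name is a hypothetical placeholder for the group your trail delivers to:

```python
import time

end = int(time.time())
insights_query = {
    "logGroupName": "/aws/cloudtrail/dynamodb-trail",  # hypothetical name
    "startTime": end - 3600,  # query the last hour
    "endTime": end,
    "queryString": (
        'filter errorCode = "AccessDenied" '
        'and eventSource = "dynamodb.amazonaws.com" '
        'and eventName = "DescribeGlobalTable"'
    ),
}

# With credentials configured:
# logs = boto3.client("logs")
# query_id = logs.start_query(**insights_query)["queryId"]
# results = logs.get_query_results(queryId=query_id)
```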

Create a CloudWatch metric filter

You can use metric filters to transform log data into actionable metrics and create an alarm. Filters can be words, exact phrases, or numeric values. You can use regular expressions (regex) to create standalone filter patterns, or you can incorporate them with JSON and space-delimited filter patterns.

To create a metric filter for the event DeleteItem, refer to Create a metric filter for a log group.
Create a metric filter for a log group
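As a sketch of what that console step does under the hood, the following builds put_metric_filter parameters that count DeleteItem events. The log group name, filter name, and metric namespace are illustrative assumptions:

```python
# Metric filter turning DeleteItem log events into a count metric.
metric_filter = {
    "logGroupName": "/aws/cloudtrail/dynamodb-trail",  # hypothetical name
    "filterName": "DynamoDBDeleteItemFilter",          # hypothetical name
    "filterPattern": '{ $.eventName = "DeleteItem" }',
    "metricTransformations": [
        {
            "metricName": "DeleteItemCount",
            "metricNamespace": "DynamoDB/Security",    # hypothetical namespace
            "metricValue": "1",  # each matching event adds 1 to the metric
        }
    ],
}

# With credentials configured:
# boto3.client("logs").put_metric_filter(**metric_filter)
```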

Create a CloudWatch alarm

After you create the metric filter, you can create a CloudWatch alarm. For instructions, refer to Creating CloudWatch alarms for CloudTrail events: examples. On the Configure actions page, choose Notification, and then choose In alarm, which indicates that an action to send a notification to an SNS topic is taken when a threshold is breached.
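The alarm from those instructions can be sketched as follows. The alarm name, threshold, period, and SNS topic ARN are assumptions you would tune to your own baseline:

```python
# Alarm on the custom metric produced by the metric filter above.
alarm = {
    "AlarmName": "DynamoDBDeleteItemAlarm",  # hypothetical name
    "MetricName": "DeleteItemCount",
    "Namespace": "DynamoDB/Security",        # must match the metric filter
    "Statistic": "Sum",
    "Period": 300,                           # evaluate 5-minute windows
    "EvaluationPeriods": 1,
    "Threshold": 1,                          # alarm on any DeleteItem
    "ComparisonOperator": "GreaterThanOrEqualToThreshold",
    "TreatMissingData": "notBreaching",      # no deletes means no alarm
    "AlarmActions": [
        "arn:aws:sns:us-east-1:123456789012:dynamodb-alerts"  # hypothetical
    ],
}

# With credentials configured:
# boto3.client("cloudwatch").put_metric_alarm(**alarm)
```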

Create an SNS topic

To create an SNS topic, follow the steps in Creating an Amazon SNS topic. Amazon SNS sends an email notification to subscribing endpoints or clients when your alarm changes its state.
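Programmatically, the topic and an email subscription can be sketched like this; the topic name and email address are placeholders:

```python
# Hypothetical topic name and subscriber for the alarm notifications.
topic_name = "dynamodb-alerts"
subscribe_params = {
    "Protocol": "email",
    "Endpoint": "security-team@example.com",  # placeholder address
}

# With credentials configured (the subscriber must confirm by email):
# sns = boto3.client("sns")
# topic_arn = sns.create_topic(Name=topic_name)["TopicArn"]
# sns.subscribe(TopicArn=topic_arn, **subscribe_params)
```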

Terraform code

We have provided a Terraform template to automate the creation of a CloudTrail trail for DynamoDB API operations, a CloudWatch metric filter, and a CloudWatch alarm. You can then create an SNS topic using your preferred notification channel.

Advanced observability for DynamoDB

You can use CloudWatch Contributor Insights to analyze log data and create time series that display contributor data. Contributor Insights provides built-in rules to analyze the performance metrics of DynamoDB. It logs the most frequently accessed keys and the most throttled keys, for both partition keys and partition+sort key pairs. This helps you find the most-used primary keys and understand who or what is impacting system performance. You have to explicitly enable Contributor Insights for your DynamoDB table.

Contributor Insights for your DynamoDB table
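Enabling it programmatically can be sketched as follows (the table name is illustrative; an optional IndexName parameter can target a global secondary index instead):

```python
# Parameters for enabling Contributor Insights on a table.
ci_params = {
    "TableName": "TestTable",
    "ContributorInsightsAction": "ENABLE",  # use "DISABLE" to turn it off
}

# With credentials configured:
# boto3.client("dynamodb").update_contributor_insights(**ci_params)
```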

You can also use an Amazon EventBridge rule to monitor DynamoDB Data Definition Language (DDL) operations like CreateTable, DeleteTable, and UpdateTable to get notified via an SNS topic. The EventBridge rule watches for specific types of events. When a matching event occurs, the event is routed to the targets associated with the rule.
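A hedged sketch of such a rule follows: the event pattern matches DynamoDB DDL calls recorded by CloudTrail, and the rule name and SNS target ARN are hypothetical placeholders:

```python
import json

# Event pattern matching DynamoDB DDL API calls delivered via CloudTrail.
event_pattern = {
    "source": ["aws.dynamodb"],
    "detail-type": ["AWS API Call via CloudTrail"],
    "detail": {
        "eventSource": ["dynamodb.amazonaws.com"],
        "eventName": ["CreateTable", "DeleteTable", "UpdateTable"],
    },
}

rule_params = {
    "Name": "dynamodb-ddl-monitor",  # hypothetical rule name
    "EventPattern": json.dumps(event_pattern),
}

# With credentials configured:
# events = boto3.client("events")
# events.put_rule(**rule_params)
# events.put_targets(
#     Rule="dynamodb-ddl-monitor",
#     Targets=[{"Id": "sns", "Arn": "arn:aws:sns:us-east-1:123456789012:dynamodb-alerts"}],
# )
```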

Clean up

To avoid ongoing charges in your AWS account, delete the DynamoDB, CloudTrail, CloudWatch, and Amazon SNS resources, along with any other resources you created as part of this post.

Conclusion

In this post, we provided guidelines on how to set up fine-grained access control with condition-based IAM policies in DynamoDB. We also provided a Terraform IaC template to integrate DynamoDB with CloudTrail, log DynamoDB control and data plane API actions as events in CloudTrail logs, create a CloudWatch metric filter, and create a CloudWatch alarm.

With this solution, you can implement proactive as well as reactive monitoring best practices to apply proper security controls for DynamoDB. For more information about fine-grained access control, check out the video Fine-Grained Access Control in Amazon DynamoDB. For further reading, refer to AWS Well-Architected Framework, Best practices for designing and architecting with DynamoDB, and Optimizing costs on DynamoDB tables.


About the Authors

Arun Chandapillai is a Senior Architect who is a diversity and inclusion champion. He is passionate about helping his Customers accelerate IT modernization through business-first Cloud adoption strategies and successfully build, deploy, and manage applications and infrastructure in the Cloud. Arun is an automotive enthusiast, an avid speaker, and a philanthropist who believes in ‘you get (back) what you give’.

Parag Nagwekar is a Senior Cloud Infrastructure Architect with AWS Proserv. He works with customers to design and build solutions to help their journey to cloud and IT modernization. His passion is to provide scalable and highly available services for applications in the cloud. He has experience in multiple disciplines including system administration, DevOps, distributed architecture, and cloud architecting.