AWS Security Blog

How to Audit Cross-Account Roles Using AWS CloudTrail and Amazon CloudWatch Events

You can use AWS Identity and Access Management (IAM) roles to grant access to resources in your AWS account, another AWS account you own, or a third-party account. For example, you may have an AWS account used for production resources and a separate AWS account for development resources. Throughout this post, I will refer to these as the Production account and the Development account. Developers often want some degree of access to resources in the Production account. To control that access, you would create a role in the Production account that allows restricted access to resources in that account. Developers can then assume the role and access resources in the Production account.

In this blog post, I will walk through the process of auditing access across AWS accounts by a cross-account role. This process links API calls that assume a role in one account to resource-related API calls in a different account. To develop this process, I will use AWS CloudTrail, Amazon CloudWatch Events, and AWS Lambda functions. When complete, the process will provide a full audit chain from end user to resource access across separate AWS accounts.

The following diagram shows the process workflow of the solution discussed in this post.

Diagram of the process workflow of the solution discussed in this post

In this workflow:

  1. An IAM user connects to the AWS Security Token Service (AWS STS) and assumes a role in the Production account.
  2. AWS STS returns a set of temporary credentials.
  3. The IAM user uses the set of temporary credentials to access resources and services in the Production account. (A minimal code sketch of this flow follows.)
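
To make this workflow concrete, here is a minimal boto3 sketch of the three steps. The profile name, role ARN, and ExternalId are placeholder values borrowed from later in this post:

import boto3

# Step 1: Connect to AWS STS from the Development account and assume the
# cross-account role in the Production account.
sts = boto3.Session(profile_name="dev").client("sts")
response = sts.assume_role(
    RoleArn="arn:aws:iam::999999999999:role/CrossAccountTest",
    RoleSessionName="TestSessionCrossAccount",
    ExternalId="3414"
)

# Step 2: AWS STS returns a set of temporary credentials.
creds = response["Credentials"]

# Step 3: Use the temporary credentials to access resources in the
# Production account.
s3 = boto3.client(
    "s3",
    aws_access_key_id=creds["AccessKeyId"],
    aws_secret_access_key=creds["SecretAccessKey"],
    aws_session_token=creds["SessionToken"]
)
print([bucket["Name"] for bucket in s3.list_buckets()["Buckets"]])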

You can extend this model to third-party accounts. For example, you may want to give a managed services provider (MSP) limited access to your AWS account to manage your resources. This scenario is similar to the first in that the MSP assumes a role in your AWS account to gain access to resources.

For more information about these two use cases, see Tutorial: Delegate Access Across AWS Accounts Using IAM Roles and How to Use an External ID When Granting Access to Your AWS Resources to a Third Party.

Solution overview

Security teams often want to track and audit who is assuming a role and what specifically that person did to resources after they assumed the role. CloudTrail provides a partial solution to this problem by recording AWS API calls for your account and delivering log files to you. The recorded information includes the identity of the API caller, the time of the API call, the source IP address of the API caller, the request parameters, and the response elements returned by the AWS service.

However, as I will explain below, there are some missing pieces. Here is a sample CloudTrail record from the Production account with the fictitious account ID 999999999999.

{
   "eventVersion":"1.03",
   "userIdentity":{
      "type":"AssumedRole",
      "principalId":"AROAICKBBQTXWLOLJLHW4:TestSessionCrossAccount",
      "arn":"arn:aws:sts:: 999999999999:assumed-role/CrossAccountTest/TestSessionCrossAccount",
      "accountId":"999999999999",
      "accessKeyId":"ASIAJJQOJ64OAM7C65AA",
      "sessionContext":{
         "attributes":{
            "mfaAuthenticated":"false",
            "creationDate":"2016-04-05T20:39:37Z"
         },
         "sessionIssuer":{
            "type":"Role",
            "principalId":"AROAICKBBQTXWLOLJLHW4",
            "arn":"arn:aws:iam::999999999999:role/CrossAccountTest",
            "accountId":"999999999999",
            "userName":"CrossAccountTest"
         }
      }
   },
   "eventTime":"2016-04-05T20:39:39Z",
   "eventSource":"s3.amazonaws.com",
   "eventName":"ListBuckets",
   "awsRegion":"us-east-1",
   "sourceIPAddress":"AWS Internal",
   "userAgent":"[Boto3/1.3.0 Python/3.4.4 Darwin/15.4.0 Botocore/1.4.8 Resource]",
   "requestParameters":null,
   "responseElements":null,
   "requestID":"2ED523DEA5409137",
   "eventID":"099ff820-00f2-4500-94fc-2b9da26d576f",
   "eventType":"AwsApiCall",
   "recipientAccountId":"999999999999"
}

In the preceding example, I can see that someone used an AssumedRole credential with the CrossAccountTest role to make the ListBuckets API call. However, what is missing from the CloudTrail record is information about the user who assumed the role in the first place. I need this information to determine who made the ListBuckets API call.

The information about who assumed the role is available only in the Development account. The following is a sample record from that account, as delivered by CloudWatch Events, which wraps the CloudTrail entry in its detail field.

{
     "version": "0",
     "id": "c204c067-a376-47a8-a760-f0bf97b89aae",
     "detail-type": "AWS API Call via CloudTrail",
     "source": "aws.sts",
     "account": "1111111111111",
     "time": "2016-04-05T20:39:37Z",
     "region": "us-east-1",
     "resources": [],
     "detail": {
         "eventVersion": "1.04",
         "userIdentity": {
             "type": "IAMUser",
             "principalId": "AIDAIDVUOOO7V6R6HKL6E",
             "arn": "arn:aws:iam::1111111111111:user/jsmith",
             "accountId": "1111111111111",
             "accessKeyId": "AKIAJ2DZP3QVQ3D6VJBQ",
             "userName": "jsmith"
         },
         "eventTime": "2016-04-05T20:39:37Z",
         "eventSource": "sts.amazonaws.com",
         "eventName": "AssumeRole",
         "awsRegion": "global",
         "sourceIPAddress": "72.21.196.66",
         "userAgent": "Boto3/1.3.0 Python/3.4.4 Darwin/15.4.0 Botocore/1.4.8",
         "requestParameters": {
             "roleArn": "arn:aws:iam::999999999999:role/CrossAccountTest",
             "roleSessionName": "TestSessionCrossAccount",
             "externalId": "3414"
         },
         "responseElements": {
             "credentials": {
                 "accessKeyId": "ASIAJJQOJ64OAM7C65AA",
                 "expiration": "Apr 5, 2016 9:39:37 PM",
                 "sessionToken": "FQoDYXdzEH4aDLnt4a+IhSowXRB+0iLXATIl"
             },
             "assumedRoleUser": {
                 "assumedRoleId": "AROAICKBBQTXWLOLJLHW4:TestSessionCrossAccount",
                 "arn": "arn:aws:sts::999999999999:assumed-role/CrossAccountTest/TestSessionCrossAccount"
             }
         },
         "requestID": "83a263cd-fb6e-11e5-88cf-c19f9d99b57d",
         "eventID": "e5a09871-dc41-4979-8cd3-e6e0fdf5ebaf",
         "eventType": "AwsApiCall"
     }
 }

In this record, I see that the user jsmith called the AssumeRole API for the role CrossAccountTest. This role exists in the Production account. I now know that user jsmith assumed the role and then later called ListBuckets in the Production account.

Because multiple users are able to assume the same role, I need to determine that it was jsmith and not some other user. To do this, make careful note of the accessKeyId in the responseElements of the preceding record. AWS STS provided this key to jsmith when he assumed the role, as part of the temporary credentials he then used in the Production account. Notice that the accessKeyId is the same in the two CloudTrail records. This pairing is how I linked the ListBuckets action to the user performing the action, jsmith.
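
In code, this linking test is a simple equality check. The following sketch (the function name is mine, not part of the solution) assumes the Development account record arrives in the CloudWatch Events format shown above and the Production account record is a raw CloudTrail record:

def records_linked(assume_role_event, resource_record):
    # The temporary access key that AWS STS issued when the role was assumed...
    issued = assume_role_event["detail"]["responseElements"]["credentials"]["accessKeyId"]
    # ...and the access key that made the resource-level API call in the
    # Production account.
    used = resource_record["userIdentity"]["accessKeyId"]
    return issued == used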

Now that I have all the information I need, I can look at how a security operations person in the Production account can easily link the two records. The method so far requires manually reviewing CloudTrail records in both accounts, which becomes complicated if I extend the use case to third parties. In that case, instead of a Development account that the security professional may have access to, a third party owns the account and will not allow access to its internal CloudTrail records.

Therefore, how do you transfer the CloudTrail records for this role from one account to another? I will use the majority of this blog post to walk through the solution. The solution uses CloudWatch Events in the Development account to publish CloudTrail records to Amazon Simple Notification Service (Amazon SNS). The SNS-Cross-Account Lambda function in the Production account subscribes to this Amazon SNS topic and receives the CloudTrail records related to assuming the specific role in question. A second Lambda function in the Production account, S3-Cross-Account, parses the Production account's CloudTrail logs, looking for events related to that same role. Finally, for auditing and reporting, both sets of audit records are stored in Amazon DynamoDB, where they are linked together.

The following diagram shows the entire process workflow.

Diagram of the entire process workflow

In this workflow:

  1. An IAM user connects to AWS STS and assumes a role in the Production account.
  2. AWS STS returns a set of temporary credentials.
  3. The IAM user uses the set of temporary credentials to access resources and services in the Production account.
  4. CloudWatch Events detects the event emitted by CloudTrail when the AssumeRole API is called in Step 1.
  5. CloudWatch Events then publishes a message to SNS.
  6. SNS in the Development account then triggers the SNS-Cross-Account Lambda function in the Production account.
  7. The SNS-Cross-Account Lambda function extracts the CloudTrail record and saves the information to DynamoDB. The information saved includes details about the IAM user who assumed the role in the Production account.
  8. When the IAM user who assumed the role (Step 1) uses the temporary credentials to access resources in the Production account, a record of the API call is logged to CloudTrail.
  9. CloudTrail saves the records to an Amazon S3 bucket.
  10. Using S3 event notifications, S3 triggers the S3-Cross-Account Lambda function each time CloudTrail saves records to the bucket.
  11. The S3-Cross-Account Lambda function downloads the CloudTrail records from S3, unzips them, and parses the logs for records related to the role in the Production account. The S3-Cross-Account Lambda function saves any relevant CloudTrail records in DynamoDB. The information saved will be used to link to the information saved in Step 7 and to trace back to the IAM user who assumed the role.

Note that I could have built a similar workflow using CloudWatch Logs destinations. For more information, see Cross-Account Log Data Sharing with Subscriptions.

Deploying the solution

Prerequisites

You will need two AWS accounts for the following walkthrough. For simplicity, the two accounts will be labeled Production and Development with fictitious AWS account IDs of 999999999999 and 111111111111, respectively. These two accounts will be abbreviated prod and dev.

This walkthrough assumes that you have configured separate AWS CLI named profiles for each of these two accounts. This is important because you will be switching between accounts, and using profiles will make this much easier. For more information, see Named Profiles.

The walkthrough

Step 1: Create the cross-account role

I put the following trust policy document in a file named cross_account_role_trust_policy.json. (Throughout the rest of this blog post, the examples include placeholder values, such as account IDs and bucket names. Be sure to replace those placeholder values with your own values.)

{
   "Version": "2012-10-17",
   "Statement": [
     {
       "Effect": "Allow",
       "Principal": {
         "AWS": "arn:aws:iam::111111111111:root"
       },
       "Action": "sts:AssumeRole",
       "Condition": {
         "StringEquals": {
           "sts:ExternalId": "3414"
         }
       }
     }
   ]
 }

Note that I am specifying an ExternalId to further restrict who can assume the role. With this restriction in place, to assume the role a user must be a member of the Development account and know the ExternalId value.

I then create the cross-account role in the prod account.

aws iam create-role --profile prod \
     --role-name CrossAccountTest \
     --assume-role-policy-document file://cross_account_role_trust_policy.json

Next, I need to attach a permissions policy to the role so that it grants access to resources in the Production account. Here, I attach the AWS managed policy, AmazonS3ReadOnlyAccess. You can attach a more or less restrictive policy to meet your organization's needs.

aws iam attach-role-policy --profile prod \
     --role-name CrossAccountTest \
     --policy-arn arn:aws:iam::aws:policy/AmazonS3ReadOnlyAccess
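
With the role in place, you can see the ExternalId condition at work by attempting to assume the role without supplying the ExternalId. The following is a quick sketch, assuming the dev named profile described in the prerequisites; the expected result is an AccessDenied error:

import boto3
from botocore.exceptions import ClientError

sts = boto3.Session(profile_name="dev").client("sts")
try:
    # No ExternalId is supplied, so the trust policy condition is not met.
    sts.assume_role(
        RoleArn="arn:aws:iam::999999999999:role/CrossAccountTest",
        RoleSessionName="MissingExternalId"
    )
except ClientError as error:
    print(error.response["Error"]["Code"])  # AccessDenied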

Step 2: Create a DynamoDB table

I create the DynamoDB table in the prod account to store the CloudTrail records from both the Development and Production accounts. I am using the eventID as the HASH key and the eventTime (the time that the event occurred) as the RANGE key. I am also saving the accessKeyId as an attribute, which will make reporting and event correlation easier.

aws dynamodb create-table --profile prod \
     --table-name CrossAccountAuditing \
     --attribute-definitions \
         AttributeName=eventID,AttributeType=S \
         AttributeName=eventTime,AttributeType=S \
     --key-schema AttributeName=eventID,KeyType=HASH AttributeName=eventTime,KeyType=RANGE \
     --provisioned-throughput ReadCapacityUnits=1,WriteCapacityUnits=1
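
Because accessKeyId is stored as a plain attribute rather than as part of the key, correlating records later means scanning the table with a filter. Here is a minimal sketch (the key value is illustrative); at higher volumes, you would likely add a global secondary index on accessKeyId instead:

import boto3
from boto3.dynamodb.conditions import Attr

table = boto3.Session(profile_name="prod").resource("dynamodb").Table("CrossAccountAuditing")

# Retrieve every audit record tied to one set of temporary credentials.
items = table.scan(FilterExpression=Attr("accessKeyId").eq("ASIAJJQOJ64OAM7C65AA"))["Items"]
for item in items:
    print(item["eventTime"], item["eventID"])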

Step 3: Create the SNS topic

CloudWatch Events will send a record to an SNS topic. This SNS topic will then publish messages to the SNS-Cross-Account Lambda function in the Production account. To create the SNS topic in the dev account, I run the following command.

aws sns create-topic --name CrossAccountSNS --profile dev

Step 4: Create the CloudWatch Events rule

The CloudWatch rule will detect the AssumeRole API call in the Development account and then publish details for this API call to the SNS topic I just created. To do this, I first create the CloudWatch rule.

aws events put-rule --profile dev \
           --name CrossAccountRoleToProd \
           --event-pattern file://cw_event_pattern.json \
           --state ENABLED \
           --description "Detects AssumeRole API calls and sends event to Production Account"

Here, I am referencing a JSON document that has the following rule-matching pattern. I put the document in a file named cw_event_pattern.json.

{
   "detail-type": [
     "AWS API Call via CloudTrail"
   ],
   "detail": {
     "eventSource": [
       "sts.amazonaws.com"
     ],
     "eventName": [
       "AssumeRole"     ],
     "requestParameters": {
       "externalId": [
         "3414"
       ]
     }
   }
 }

The pattern will match on an event source coming from AWS STS, the AssumeRole API call, and the externalId of 3414, which is passed as a parameter to the AssumeRole call. All of these filters must match to trigger the rule successfully.

I could have matched on other fields as well. The following is a sample CloudWatch Events record.

{
     "version": "0",
     "id": "c204c067-a376-47a8-a760-f0bf97b89aae",
     "detail-type": "AWS API Call via CloudTrail",
     "source": "aws.sts",
     "account": "844014150832",
     "time": "2016-04-05T20:39:37Z",
     "region": "us-east-1",
     "resources": [],
     "detail": {
         "eventVersion": "1.04",
         "userIdentity": {
             "type": "IAMUser",
             "principalId": "AIDAIDVUOOO7V6R6HKL6E",
             "arn": "arn:aws:iam::111111111111:user/jsmith",
             "accountId": "111111111111",
             "accessKeyId": "AKIAJ2DZP3QVQ3D6VJBQ",
             "userName": "jsmith "
         },
         "eventTime": "2016-04-05T20:39:37Z",
         "eventSource": "sts.amazonaws.com",
         "eventName": "AssumeRole",
         "awsRegion": "global",
         "sourceIPAddress": "72.21.196.66",
         "userAgent": "Boto3/1.3.0 Python/3.4.4 Darwin/15.4.0 Botocore/1.4.8",
         "requestParameters": {
             "roleArn": "arn:aws:iam::999999999999:role/CrossAccountTest",
             "roleSessionName": "TestSessionCrossAccount",
             "externalId": "3414"
         },
         "responseElements": {
             "credentials": {
                 "accessKeyId": "ASIAJJQOJ64OAM7C65AA",
                 "expiration": "Apr 5, 2016 9:39:37 PM",
                 "sessionToken": "FQoDYXdzEH4aDLnt4a+IhSowXRB+0iLXATIl"
             },
             "assumedRoleUser": {
                 "assumedRoleId": "AROAICKBBQTXWLOLJLHW4:TestSessionCrossAccount",
                 "arn": "arn:aws:sts::9999999999999:assumed-role/CrossAccountTest/TestSessionCrossAccount"
             }
         },
         "requestID": "83a263cd-fb6e-11e5-88cf-c19f9d99b57d",
         "eventID": "e5a09871-dc41-4979-8cd3-e6e0fdf5ebaf",
         "eventType": "AwsApiCall"
     }
 }
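
If you want to verify a pattern before wiring up the rule, CloudWatch Events can evaluate a pattern against a sample event through the TestEventPattern API. Here is a sketch using boto3 and a trimmed version of the record above:

import json
import boto3

events = boto3.Session(profile_name="dev").client("events")

pattern = {
    "detail-type": ["AWS API Call via CloudTrail"],
    "detail": {
        "eventSource": ["sts.amazonaws.com"],
        "eventName": ["AssumeRole"],
        "requestParameters": {"externalId": ["3414"]}
    }
}

# A trimmed sample event; the standard top-level fields must be present.
sample_event = {
    "version": "0",
    "id": "c204c067-a376-47a8-a760-f0bf97b89aae",
    "detail-type": "AWS API Call via CloudTrail",
    "source": "aws.sts",
    "account": "111111111111",
    "time": "2016-04-05T20:39:37Z",
    "region": "us-east-1",
    "resources": [],
    "detail": {
        "eventSource": "sts.amazonaws.com",
        "eventName": "AssumeRole",
        "requestParameters": {"externalId": "3414"}
    }
}

result = events.test_event_pattern(
    EventPattern=json.dumps(pattern),
    Event=json.dumps(sample_event)
)
print(result["Result"])  # True if the pattern matches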

Instead of matching on the externalId, I could have matched on the specific role and captured only events related to that role. To configure such a rule, I would have changed the matching event pattern to the following.

{
   "detail-type": [
     "AWS API Call via CloudTrail"
   ],
   "detail": {
     "eventSource": [
       "sts.amazonaws.com"     ],
     "eventName": [
       "AssumeRole"     ],
     "requestParameters": {
       "roleArn": [
         "arn:aws:iam::999999999999:role/CrossAccountTest"
       ]
     }
   }
 }

Now that I have created the rule, I need to add a target to the rule. This tells the CloudWatch rule where to send matching events. In this case, I am sending matching events to the SNS topic that I created previously.

aws events put-targets --profile dev \
           --rule CrossAccountRoleToProd \
           --targets Id=1,Arn=arn:aws:sns:us-east-1:111111111111:CrossAccountSNS

Step 5: Create the Lambda-SNS function

I will subscribe the SNS-Cross-Account Lambda function in the Production account to the SNS topic in the Development account. The SNS-Cross-Account function will process CloudWatch Events that are triggered when the role is assumed.

The SNS-Cross-Account Lambda function parses the incoming SNS message and then saves the CloudTrail record to DynamoDB. Save this code as lambda_function.py. Then zip that file into LambdaWithSNS.zip.

import json
import logging 
import boto3 

DYNAMODB_TABLE_NAME = "CrossAccountAuditing" 

logger = logging.getLogger() 
logger.setLevel(logging.INFO) 

DYNAMO = boto3.resource("dynamodb") 
TABLE = DYNAMO.Table(DYNAMODB_TABLE_NAME) 

logger.info('Loading function') 

def save_record(record):
     """
     Save the record to DynamoDB
     :param record:
     :return:
     """
     logger.info("Saving record to DynamoDB...")
     TABLE.put_item(
        Item={
             'accessKeyId': record['detail']['responseElements']['credentials']['accessKeyId'],
             'eventTime': record['detail']['eventTime'],
             'eventID': record['detail']['eventID'],
             'record': json.dumps(record)
         }
     )
     logger.info("Saved record to DynamoDB")

def lambda_handler(event, context):
     # Loop through records delivered by SNS
     for record in event['Records']:
         # Extract the SNS message from the record
         message = record['Sns']['Message']
         logger.info("SNS Message: {}".format(message))
         save_record(json.loads(message))
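
Before deploying, you can exercise the handler locally with a hand-built SNS event. The following sketch uses values from the sample records earlier in this post; note that it writes to the real DynamoDB table, so point it at a test table if you prefer:

if __name__ == "__main__":
    # A hand-built SNS event wrapping a minimal CloudWatch Events record.
    fake_message = {
        "detail": {
            "eventTime": "2016-04-05T20:39:37Z",
            "eventID": "e5a09871-dc41-4979-8cd3-e6e0fdf5ebaf",
            "responseElements": {
                "credentials": {"accessKeyId": "ASIAJJQOJ64OAM7C65AA"}
            }
        }
    }
    fake_event = {"Records": [{"Sns": {"Message": json.dumps(fake_message)}}]}
    lambda_handler(fake_event, None)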

Next, I need to create the execution role that Lambda will use when it runs. To do this, I first create the role.

aws iam create-role --profile prod \
     --role-name LambdaSNSExecutionRole \
     --assume-role-policy-document file://lambda_trust_policy.json

The trust policy for this role allows Lambda to assume the role. I put the following trust policy document in a file named lambda_trust_policy.json.

{
   "Version": "2012-10-17",
   "Statement": [
     {
       "Sid": "",
       "Effect": "Allow",
       "Principal": {
         "Service": "lambda.amazonaws.com"
       },
       "Action": "sts:AssumeRole"
     }
   ]
 }

When creating the following required access policy, I give the SNS-Cross-Account Lambda function the minimum rights required to save its logs to CloudWatch Logs and additional rights to the DynamoDB table. I save the following access policy document in a file named lambda_sns_access_policy.json.

{
     "Version": "2012-10-17",
     "Statement": [
         {
             "Action": [
                 "logs:CreateLogGroup",
                 "logs:CreateLogStream",
                 "logs:PutLogEvents"
             ],
             "Effect": "Allow",
             "Resource": "arn:aws:logs:*:*:*"
         },
         {
             "Sid": "PutUpdateDeleteOnCrossAccountAuditing",
             "Effect": "Allow",
             "Action": [
                 "dynamodb:PutItem",
                 "dynamodb:UpdateItem",
                 "dynamodb:DeleteItem"
             ],
             "Resource": "arn:aws:dynamodb:us-east-1:999999999999:table/CrossAccountAuditing"
         }
     ]
}

I then create an access policy and attach it to the role in the prod account.

aws iam create-policy --profile prod \
     --policy-name LambdaSNSExecutionRolePolicy \
     --policy-document file://lambda_sns_access_policy.json

aws iam attach-role-policy --profile prod \
     --role-name LambdaSNSExecutionRole \
     --policy-arn arn:aws:iam::999999999999:policy/LambdaSNSExecutionRolePolicy

Finally, I can create the SNS-Cross-Account Lambda function in the prod account.

aws lambda create-function --profile prod \
     --function-name SNS-Cross-Account \
     --runtime python2.7 \
     --role arn:aws:iam::999999999999:role/LambdaSNSExecutionRole \
     --handler lambda_function.lambda_handler \
     --description "SNS Cross Account Function" \
     --timeout 60 \
     --memory-size 128 \
     --zip-file fileb://LambdaWithSNS.zip

Step 6: Subscribe the Lambda function cross-account to SNS

I now need to subscribe the SNS-Cross-Account Lambda function that I just created to the SNS topic. This is usually a straightforward process, but it requires a few extra steps in this use case because the SNS topic and the SNS-Cross-Account Lambda function are in two different AWS accounts.

First, I need to add permission to the SNS topic in the dev account to allow access from the prod account.

aws sns add-permission --profile dev \
     --topic-arn arn:aws:sns:us-east-1:111111111111:CrossAccountSNS \
     --label lambda-access \
     --aws-account-id 999999999999 \
     --action-name Subscribe ListSubscriptionsByTopic Receive

Next, I add a permission to the Lambda function in the prod account to allow the SNS topic in the dev account to invoke the function.

aws lambda add-permission --profile prod \
     --function-name SNS-Cross-Account \
     --statement-id SNS-Cross-Account \
     --action "lambda:InvokeFunction" \
     --principal sns.amazonaws.com \
     --source-arn arn:aws:sns:us-east-1:111111111111:CrossAccountSNS

Finally, I subscribe the SNS-Cross-Account Lambda function to the SNS topic.

aws sns subscribe --profile prod \
     --topic-arn arn:aws:sns:us-east-1:111111111111:CrossAccountSNS \
     --protocol lambda \
     --notification-endpoint arn:aws:lambda:us-east-1:999999999999:function:SNS-Cross-Account

Step 7: Create the Lambda-S3-CloudTrail function

CloudTrail saves its logs to S3, and each time new log files are delivered, an S3 event is generated.

I will create a second Lambda function called S3-Cross-Account. The S3 event will execute the S3-Cross-Account Lambda function.

The S3-Cross-Account function will parse the CloudTrail records that were saved to S3 and look for any relevant entries for the assumed role. Those relevant resource access–related entries will be stored in DynamoDB. These entries will be “linked” to the AssumeRole entries stored by the SNS-Cross-Account Lambda function.

The code of the S3-Cross-Account Lambda function follows. Edit the placeholder value in the CROSS_ACCOUNT_ROLE_ARN parameter, and save this code as lambda_function.py. Then zip that file into a file called LambdaWithS3.zip.

import json 
import urllib 
import boto3 
import zlib 
import logging 

DYNAMODB_TABLE_NAME = "CrossAccountAuditing" 
CROSS_ACCOUNT_ROLE_ARN = 'arn:aws:sts::999999999999:assumed-role/CrossAccountTest' 

logger = logging.getLogger() 
logger.setLevel(logging.INFO) 

S3 = boto3.resource('s3') 
DYNAMO = boto3.resource("dynamodb") 
TABLE = DYNAMO.Table(DYNAMODB_TABLE_NAME) 

logger.info('Loading function')

def parse_s3_cloudtrail(bucket_name, key):
     """
     Parse the CloudTrail records and store the relevant ones in DynamoDB
     :param bucket_name:
     :param key:
     :return:
     """
     s3_object = S3.Object(bucket_name, key)
     # Download the file contents from S3 to memory
     payload_gz = s3_object.get()['Body'].read()
     # The CloudTrail record is gzipped by default so it must be decompressed
     payload = zlib.decompress(payload_gz, 16 + zlib.MAX_WBITS)
     # Convert the text JSON into a Python dictionary
     payload_json = json.loads(payload)
     # Loop through the records in the CloudTrail file
     for record in payload_json['Records']:
         # If a record matching the role is found, save it to DynamoDB
         if CROSS_ACCOUNT_ROLE_ARN in record['userIdentity']['arn']:
             logger.info("Access Key ID: {}".format(record['userIdentity']['accessKeyId']))
             logger.info("Record: {}".format(record))
             save_record(record) 

def save_record(record):
     """
     Save record to DynamoDB

     :param record:
     :return:
     """
     logger.info("Saving record to DynamoDB...")
     TABLE.put_item(
        Item={
             'accessKeyId': record['userIdentity']['accessKeyId'],
             'eventTime': record['eventTime'],
             'eventID': record['eventID'],
             'record': json.dumps(record)
         }
     )
     logger.info("Saved record to DynamoDB") 

def lambda_handler(event, context):
     # Get the object from the S3 event
     for record in event['Records']:
         bucket = record['s3']['bucket']['name']
         key = urllib.unquote_plus(record['s3']['object']['key']).decode('utf8')
         parse_s3_cloudtrail(bucket, key)
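
As with the SNS-Cross-Account function, you can smoke-test this handler locally by simulating the S3 event. The bucket name and object key below are illustrative; point them at a real CloudTrail log object in your bucket:

if __name__ == "__main__":
    # A hand-built S3 event pointing at one CloudTrail log file.
    fake_event = {
        "Records": [{
            "s3": {
                "bucket": {"name": "cloudtrailbucket"},
                "object": {"key": "AWSLogs/999999999999/CloudTrail/us-east-1/2016/04/05/example.json.gz"}
            }
        }]
    }
    lambda_handler(fake_event, None)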

Next, I need to create the execution role that Lambda will use when it runs. First, I create the role.

aws iam create-role --profile prod \
     --role-name LambdaS3ExecutionRole \
     --assume-role-policy-document file://lambda_trust_policy.json

Note that the trust policy is the same one used for the SNS-Cross-Account Lambda function.

When creating the following required access policy, I give the S3-Cross-Account Lambda function the minimum rights required to save its logs to CloudWatch Logs, plus additional rights to the DynamoDB table and S3. For S3, I grant read access only to the bucket that holds the CloudTrail records (cloudtrailbucket in the following code). Your bucket name will be different, so you will need to edit the following document. I put the access policy document in a file named lambda_s3_access_policy.json.

{
     "Version": "2012-10-17",
     "Statement": [
         {
             "Action": [
                 "logs:CreateLogGroup",
                 "logs:CreateLogStream",
                 "logs:PutLogEvents"
             ],
             "Effect": "Allow",
             "Resource": "arn:aws:logs:*:*:*"
         },
         {
           "Effect": "Allow",
           "Action": [
             "s3:GetObject"
           ],
           "Resource": "arn:aws:s3:::cloudtrailbucket/*"
         },
         {
             "Sid": "PutUpdateDeleteOnCrossAccountAuditing",
             "Effect": "Allow",
             "Action": [
                 "dynamodb:PutItem",
                 "dynamodb:UpdateItem",
                 "dynamodb:DeleteItem"
             ],
             "Resource": "arn:aws:dynamodb:us-east-1:999999999999:table/CrossAccountAuditing"
         }
     ]
 }

I then create an access policy and attach it to the role.

aws iam create-policy --profile prod \
     --policy-name LambdaS3ExecutionRolePolicy \
     --policy-document file://lambda_s3_access_policy.json

aws iam attach-role-policy --profile prod \
     --role-name LambdaS3ExecutionRole \
     --policy-arn arn:aws:iam::999999999999:policy/LambdaS3ExecutionRolePolicy

I next create the S3-Cross-Account function.

aws lambda create-function --profile prod \
     --function-name S3-Cross-Account \
     --runtime python2.7 \
     --role arn:aws:iam::999999999999:role/LambdaS3ExecutionRole \
     --handler lambda_function.lambda_handler \
     --description "S3 X Account Function" \
     --timeout 60 \
     --memory-size 128 \
     --zip-file fileb://LambdaWithS3.zip

Finally, I add an S3 event to the CloudTrail bucket that will trigger the S3-Cross-Account function when new CloudTrail records are put in the bucket. To do this, I first add a permission allowing S3 to invoke the S3-Cross-Account function. You will need to change the source-arn to the CloudTrail bucket for your account (cloudtrailbucket in the following code).

aws lambda add-permission --profile prod \
     --function-name S3-Cross-Account \
     --statement-id Id-1 \
     --action "lambda:InvokeFunction" \
     --principal s3.amazonaws.com \
     --source-arn arn:aws:s3:::cloudtrailbucket \
     --source-account 999999999999

I put the following policy document in a file named notification.json.

{
   "CloudFunctionConfiguration": {
     "Id": "ObjectCreatedEvents",
     "Events": [ "s3:ObjectCreated:*" ],
     "CloudFunction": "arn:aws:lambda:us-east-1:999999999999:function:S3-Cross-Account"
   }
 }

In the configuration file, the CloudFunction is the ARN of the S3-Cross-Account function I just created.

Next, I add an S3 event that will be triggered when an object is added to the bucket. You will need to change the bucket name to the CloudTrail bucket for your account.

aws s3api put-bucket-notification --profile prod \
     --bucket cloudtrailbucket \
     --notification-configuration file://notification.json
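
To confirm that the notification is in place, you can read the bucket's notification configuration back with boto3; a quick sketch:

import boto3

s3 = boto3.Session(profile_name="prod").client("s3")
# Should echo back the CloudFunctionConfiguration set above.
print(s3.get_bucket_notification(Bucket="cloudtrailbucket"))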

Step 8: Test everything

To test that everything is working properly, the first step is to assume the role.

aws sts assume-role --profile dev \
     --role-arn "arn:aws:iam::999999999999:role/CrossAccountTest" \
     --role-session-name "CrossAccountTest" \
     --external-id 3414

The preceding command will output the temporary credentials that allow access to the Production account. Here is a sample of the output.

{
     "AssumedRoleUser": {
         "AssumedRoleId": "AROAJZRFWJRM4FHMGQ74K:CrossAccountTest",
         "Arn": "arn:aws:sts::999999999999:assumed-role/CrossAccountTest/CrossAccountTest"
     },
     "Credentials": {
         "SecretAccessKey": "EXAMPLEWEl02dd2kXW6d/Z7F/voe0TNR/G2gD/fB",
         "SessionToken": "EXAMPLE//////////wEaDO441gL1nAGt9M3XjyLQAR",
         "Expiration": "2016-04-08T18:08:46Z",
         "AccessKeyId": "ASIAJPVGV2XRFLHSWW7Q"
     }
 }

Using the temporary credentials returned by the assume-role call, you can access the Production account and work with the S3 resources there, for example by calling ListBuckets. You should start to see entries appear in the DynamoDB table, as shown in the following screenshot from the DynamoDB console.

Image of entries in the DynamoDB table

Note how the accessKeyId is the same for both eventIDs. Opening the first eventID, I can see that this record is for the AssumeRole API call and that user jsmith is making the call.

Image of the record for the AssumeRole API call

When I open the second eventID, I see that jsmith has used the cross-account role to list the buckets in the Production account. Note how the accessKeyId is the same in both screenshots, indicating the link between the AssumeRole and the ListBuckets.

Image showing user jsmith used the cross-account role to list the buckets in the Production account

Conclusion

In this post, I have shown you how to develop a method to provide end-to-end auditing of cross-account roles. With this method in place, you have a full audit trail whenever a user accesses your AWS resources via a cross-account role. To develop this workflow, I relied heavily on Lambda, CloudTrail, and CloudWatch Events. The workflow also depended on exchanging data across accounts using SNS.

If you have comments about this blog post, submit them in the “Comments” section below. If you have questions, please start a new thread on the CloudTrail forum.

– Michael