AWS Storage Blog

Modern data protection architecture on Amazon S3: Part 2

Update (12/11/2023): As of November 20, 2023, Amazon S3 supports enabling S3 Object Lock on existing buckets.


Keeping data secure and usable in unforeseen circumstances such as accidental breaches, human error, and hacking is critical to business continuity and success. To effectively mitigate the impact of these events on business-critical assets, one recommended strategy is to create immutable, unchangeable copies of those assets and store them in isolated, secondary accounts with restricted access. After a security incident, recovering your data is just as important as having kept it safe in the first place, allowing your business to resume smooth operations with minimal downtime.

In part 1 of this blog series, we discussed an effective strategy for storing immutable copies of your business-critical assets in a different AWS account using Amazon S3. We also covered how to replicate those assets securely from the source account to the new target (destination) account.

In this second post, we address how to retrieve data from your secondary bucket following a security-related event that might have compromised the source data. It’s a best practice to isolate the resources affected by such incidents for forensics. With that isolation in effect, you would need to recover the applications and data into a new AWS account. Here, we implement a solution to recover the existing protected objects to a new Amazon S3 bucket under a new recovery account. With this solution, you can minimize business downtime through quick retrieval of your data secured in another account.

Solution overview

This solution uses Amazon S3 Batch Replication to replicate the existing objects (golden copies) from your source Object Lock enabled bucket to a new Amazon S3 bucket created in a new destination (recovery) account. This enables you to maintain the golden copies in your isolated account and configure your recovered application(s) to use the replicated data stored in the new S3 bucket.

The major components of the solution are as follows:

  • Amazon S3 Batch Replication provides a way to replicate objects that existed before a replication configuration was in place, objects that have previously been replicated, and objects that have failed replication. This is done through a Batch Operations job, and it differs from live replication, which continuously and automatically replicates new objects across Amazon S3 buckets.
  • Amazon EventBridge is a serverless event bus service that you use to monitor the S3 Batch Replication job status events.
  • Amazon Simple Notification Service (Amazon SNS) is a fully managed messaging service that you use to be notified by email when the S3 Batch Replication completion event is matched in Amazon EventBridge.

Solution walkthrough

We walk through the following steps to implement this solution.

  1. Create a new Amazon S3 bucket in the destination and source AWS accounts.
  2. Create an AWS IAM policy for Amazon S3 Batch Replication.
  3. Create and configure an AWS IAM role for Amazon S3 Batch Replication.
  4. Update the source account AWS Key Management Service (AWS KMS) key permissions.
  5. Add a bucket policy to the destination Amazon S3 bucket.
  6. Create a new Amazon S3 Batch Replication rule and set up Amazon EventBridge notifications for job status.
  7. Create the Amazon S3 Batch Replication job.

The architecture layout shows how Batch Replication helps recover objects from the primary (source) account into a new recovery bucket under a different AWS account.

Amazon S3 Batch Replication for recovery of existing objects to a new AWS account

Prerequisite: Contact AWS Support to obtain Object Lock token

Because your source bucket is Object Lock enabled, you must take an additional step: contact AWS Support to obtain an Object Lock token before you can configure S3 Batch Replication rules.

Open a case with AWS Support. When support contacts you, they will need the following information:

  • The source bucket name.
  • The IAM role used for replication.
  • The destination bucket name.

The following steps walk you through creating the new buckets, a new IAM role, and the required permissions.

1a. Create a new Amazon S3 bucket in the destination account

In this section, you create a new Amazon S3 destination bucket with server-side encryption using AWS KMS keys (SSE-KMS) and Amazon S3 Bucket Keys for storing your replicated objects. You can use any existing AWS account, but it should be a different account from the one where the source Object Lock enabled bucket is located. In the event of a real incident, you should create this bucket in a new AWS account so as to adequately isolate it from the attack surface and have a clean slate from which you can relaunch your application to serve end users.

1. In the destination account, create a new customer-managed AWS KMS key (Symmetric) in the same Region you plan to create your destination bucket.
2. In the destination account AWS KMS console, add the source AWS account number in the Other AWS Accounts section of the AWS KMS key you created in step 1.

This diagram shows how you can add the source account's ARN under the destination account's AWS KMS console.

3. In the destination account, create a new S3 bucket. This will be your S3 Batch Replication destination bucket.

a. Enter a name for your new bucket, for example, mys3-recovery-bucket.
b. Select the same Region as your source Object Lock enabled bucket. Unless there is an explicit reason to replicate the data into a different AWS Region, replicate within the same Region to avoid data transfer costs.
c. Select ACLs disabled (recommended) for Object Ownership.

i. Details on disabling ACLs can be found in the Amazon S3 User Guide.

This diagram shows the configuration for setting up a new recovery bucket under the destination account.

d. Select Block all Public access.
e. Select Enable for Bucket Versioning.

The diagram shows that in the destination account's recovery bucket, we select Block all Public access and Enable for Bucket Versioning.

f. Select Enable for Default Encryption.

i. Choose AWS Key Management Service key (SSE-KMS) for Encryption key type.
ii. For AWS KMS key, select Choose from your AWS KMS keys and select the KMS key you created.
iii. Select Enable for Bucket Key.

The diagram shows enabling the AWS KMS encryption option along with enabling the Bucket Key in the recovery bucket setup.

g. Under Advanced Settings choose Enable for Object Lock.

i. When using S3 Batch Replication with an Object Lock enabled source bucket, Object Lock must also be enabled on the destination bucket. See the Amazon S3 documentation for details about replication with Object Lock and what is replicated.

This diagram shows how to enable Object Lock under the Advanced Settings option.
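
If you prefer to script this setup, the following AWS CLI sketch performs roughly the same destination-account steps; the bucket name is the example value used above, and the key ARN is a placeholder to adjust for your environment:

# Create a symmetric customer managed KMS key in the destination Region
aws kms create-key --description "Recovery bucket encryption key"

# Create the recovery bucket with Object Lock enabled (this also enables versioning);
# outside us-east-1, add --create-bucket-configuration LocationConstraint=<region>
aws s3api create-bucket \
    --bucket mys3-recovery-bucket \
    --object-lock-enabled-for-bucket

# Block all public access
aws s3api put-public-access-block \
    --bucket mys3-recovery-bucket \
    --public-access-block-configuration BlockPublicAcls=true,IgnorePublicAcls=true,BlockPublicPolicy=true,RestrictPublicBuckets=true

# Set default encryption to SSE-KMS with an S3 Bucket Key
aws s3api put-bucket-encryption \
    --bucket mys3-recovery-bucket \
    --server-side-encryption-configuration '{
        "Rules": [{
            "ApplyServerSideEncryptionByDefault": {
                "SSEAlgorithm": "aws:kms",
                "KMSMasterKeyID": "<destination bucket KMS key ARN>"
            },
            "BucketKeyEnabled": true
        }]
    }'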

1b. Create a new Amazon S3 bucket in the source account

In this section you create a new Amazon S3 bucket to store the manifest and reports for Amazon S3 Batch Replication.

  1. In the source account, create a new S3 bucket. This will be your S3 Batch Replication reports bucket.

a. Enter a name for your new bucket, for example, mys3-batch-report-bucket.
b. Select the same Region as your source Object Lock enabled bucket.
c. For Object Ownership, select ACLs disabled (recommended). (Details on disabling ACLs can be found in the Amazon S3 User Guide.)

The diagram shows the creation of a new bucket for storing the manifest and reports for Batch Replication under the source AWS account.

d. Select Block all Public access.
e. Select Disable for Bucket Versioning.

This diagram shows how to set up the bucket that stores the manifest file and reports: blocking public access and disabling Bucket Versioning.

f. In Default encryption, select Enable for Server-side encryption.

i. Choose Amazon S3-managed keys (SSE-S3).

This diagram shows that for this bucket (which stores the manifest file and reports), we select the Amazon S3 managed (SSE-S3) encryption option.
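
A rough CLI equivalent of the report bucket setup, using the example bucket name above (versioning stays disabled by default, and the SSE-S3 default encryption is set explicitly for clarity):

# Create the Batch Replication reports bucket;
# outside us-east-1, add --create-bucket-configuration LocationConstraint=<region>
aws s3api create-bucket --bucket mys3-batch-report-bucket

# Block all public access
aws s3api put-public-access-block \
    --bucket mys3-batch-report-bucket \
    --public-access-block-configuration BlockPublicAcls=true,IgnorePublicAcls=true,BlockPublicPolicy=true,RestrictPublicBuckets=true

# Default encryption with Amazon S3 managed keys (SSE-S3)
aws s3api put-bucket-encryption \
    --bucket mys3-batch-report-bucket \
    --server-side-encryption-configuration '{
        "Rules": [{
            "ApplyServerSideEncryptionByDefault": { "SSEAlgorithm": "AES256" }
        }]
    }'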

2. Create an AWS IAM Policy for Amazon S3 Batch Replication

In this section you create a new AWS IAM policy to be used by the Amazon S3 Batch Replication role.

  1. In the source bucket account, open the AWS Identity and Access Management (IAM) console.
  2. Select Policies.
  3. Select Create policy.
  4. Select the JSON tab and enter the following policy, making the appropriate changes to the placeholders shown in angle brackets.
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "1",
            "Effect": "Allow",
            "Action": "kms:Decrypt",
            "Resource": "<your source bucket KMS key ARN>",
            "Condition": {
                "StringLike": {
                    "kms:EncryptionContext:aws:s3:arn": "arn:aws:s3:::<your source bucket>",
                    "kms:ViaService": "s3.<region>.amazonaws.com"
                }
            }
        },
        {
            "Sid": "2",
            "Effect": "Allow",
            "Action": "kms:Encrypt",
            "Resource": "<your destination bucket KMS key ARN>",
            "Condition": {
                "StringLike": {
                    "kms:EncryptionContext:aws:s3:arn": "arn:aws:s3:::<your destination bucket>",
                    "kms:ViaService": "s3.<region>.amazonaws.com"
                }
            }
        },
        {
            "Sid": "3",
            "Effect": "Allow",
            "Action": [
                "s3:ListBucket",
                "s3:GetReplicationConfiguration",
                "s3:PutInventoryConfiguration"
            ],
            "Resource": "arn:aws:s3:::<your source bucket name>"
        },
        {
            "Sid": "4",
            "Effect": "Allow",
            "Action": [
                "s3:GetObjectRetention",
                "s3:GetObjectVersionAcl",
                "s3:GetObjectTagging",
                "s3:GetObjectVersionForReplication",
                "s3:GetObjectLegalHold",
                "s3:InitiateReplication"
            ],
            "Resource": [
                "arn:aws:s3:::<your source bucket name>",
                "arn:aws:s3:::<your source bucket name>/*"
            ]
        },
        {
            "Sid": "5",
            "Effect": "Allow",
            "Action": [
                "s3:ReplicateObject",
                "s3:ReplicateDelete"
            ],
            "Resource": "arn:aws:s3:::<your destination bucket name>/*"
        },
        {
            "Sid": "6",
            "Effect": "Allow",
            "Action": [
                "s3:GetObject",
                "s3:GetObjectVersion"
            ],
            "Resource": "arn:aws:s3:::<your report bucket name>/*"
        },
        {
            "Sid": "7",
            "Effect": "Allow",
            "Action": "s3:PutObject",
            "Resource": "arn:aws:s3:::<your report bucket name>/*"
        }
    ]
}
  5. Select Next until you reach the Review policy page.
  6. Enter a name for your policy, for example, s3-batch-replication-role-policy.
  7. Select Create policy.
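
If you are scripting instead of using the console, the policy document above (saved locally, file name illustrative) can be created with a single CLI call:

# Create the IAM policy from the JSON document shown above
aws iam create-policy \
    --policy-name s3-batch-replication-role-policy \
    --policy-document file://s3-batch-replication-policy.json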

3. Create and configure an AWS IAM Role for Amazon S3 Batch Replication

In this section you create a new AWS IAM role to be used for Amazon S3 Batch Replication.

  1. Go to the AWS IAM console.
  2. Select Roles.
  3. Select Create role.

a. In the Use case section, select S3 under Use cases for other AWS services, and choose S3 Batch Operations.

The diagram shows selecting S3 Batch Operations as the trusted entity type.

4. Select Next.

a. Search for the new policy you just created and select the box next to that policy.

5. Select Next.

a. Enter a Role name, for example, s3-batch-replication-role.
b. Select Create role.

6. Edit the Trust relationships.

a. Select the Trust relationships tab.
b. Select Edit trust policy and replace with the following:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {
                "Service": [
                    "batchoperations.s3.amazonaws.com",
                    "s3.amazonaws.com"
                ]
            },
            "Action": "sts:AssumeRole"
        }
    ]
}
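
A minimal CLI sketch of the same role setup, assuming the trust policy above is saved as trust-policy.json and the permissions policy from the previous section already exists:

# Create the role with the Batch Operations trust policy shown above
aws iam create-role \
    --role-name s3-batch-replication-role \
    --assume-role-policy-document file://trust-policy.json

# Attach the replication permissions policy created earlier
aws iam attach-role-policy \
    --role-name s3-batch-replication-role \
    --policy-arn arn:aws:iam::<source account number>:policy/s3-batch-replication-role-policy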

4. Update source account AWS Key Management Service (KMS) permissions

Update the permissions on the existing AWS KMS Key used for the source bucket encryption.

  1. In the source account, open the AWS KMS console, locate the key used for your bucket encryption, and select that key.
  2. In the Key users section, add the new role you just created.
  3. In the Other AWS accounts section, add the AWS account number where you created the recovery bucket.
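
Under the hood, these console actions add statements to the key policy. The following is a hedged sketch of the kind of key-user statement the console generates; the exact statement the console writes may differ:

{
    "Sid": "Allow use of the key for S3 Batch Replication",
    "Effect": "Allow",
    "Principal": {
        "AWS": [
            "arn:aws:iam::<source account number>:role/s3-batch-replication-role",
            "arn:aws:iam::<destination account number>:root"
        ]
    },
    "Action": [
        "kms:Encrypt",
        "kms:Decrypt",
        "kms:ReEncrypt*",
        "kms:GenerateDataKey*",
        "kms:DescribeKey"
    ],
    "Resource": "*"
}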

5. Add a bucket policy to the destination Amazon S3 bucket

Add a bucket policy on the destination bucket for Amazon S3 Batch Replication.

  1. In the destination account, open the Amazon S3 console, locate the bucket you created earlier, and select the bucket name.
  2. Go to the Permissions tab of the bucket.
  3. In the Bucket policy section, add the following bucket policy, making the appropriate changes to the placeholders shown in angle brackets.
{
    "Version": "2012-10-17",
    "Id": "PolicyForDestinationBucket",
    "Statement": [
        {
            "Sid": "Permissions on objects",
            "Effect": "Allow",
            "Principal": {
                "AWS": "<IAM role ARN for replication>"
            },
            "Action": [
                "s3:ReplicateDelete",
                "s3:ReplicateObject"
            ],
            "Resource": "arn:aws:s3:::<recovery bucket name>/*"
        },
        {
            "Sid": "Permissions on bucket",
            "Effect": "Allow",
            "Principal": {
                "AWS": "<IAM role ARN for replication>"
            },
            "Action": [
                "s3:List*",
                "s3:GetBucketVersioning",
                "s3:PutBucketVersioning"
            ],
            "Resource": "arn:aws:s3:::<recovery bucket name>"
        }
    ]
}
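
If you keep this policy in a local file (file name illustrative), applying it from the CLI looks like this:

# Apply the bucket policy to the recovery bucket in the destination account
aws s3api put-bucket-policy \
    --bucket mys3-recovery-bucket \
    --policy file://destination-bucket-policy.json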

6. Create a new Amazon S3 Batch Replication rule

At this point, you should have received the Object Lock token from AWS Support. In this section, you use the AWS CLI to create the new replication rule, because the rule must be created with the Object Lock token to enable S3 Batch Replication. The following steps are performed in the source account, which contains the S3 bucket with your existing objects.

  1. Open the AWS CLI on your computer. You need to use a profile with the proper access to create the S3 Batch Replication rule in the source account.
  2. Open a text editor and enter the following replication rule configuration, making the appropriate changes to the placeholders shown in angle brackets.
{
    "Role": "<IAM role for S3 Batch Replication>",
    "Rules": [
        {
            "ID": "blog-batch-replication",
            "Priority": 0,
            "Filter": {},
            "Status": "Enabled",
            "SourceSelectionCriteria": {
                "SseKmsEncryptedObjects": {
                    "Status": "Enabled"
                }
            },
            "Destination": {
                "Bucket": "arn:aws:s3:::<destination bucket name>",
                "Account": "<destination AWS Account number>",
                "StorageClass": "INTELLIGENT_TIERING",
                "AccessControlTranslation": {
                    "Owner": "Destination"
                },
                "EncryptionConfiguration": {
                    "ReplicaKmsKeyID": "<KMS key ARN for destination bucket>"
                },
                "Metrics": {
                    "Status": "Enabled"
                }
            },
            "DeleteMarkerReplication": {
                "Status": "Disabled"
            }
        }
    ]
}

a. Save the file as replication-config.json.

  3. In the AWS CLI prompt, enter the following command to create the replication rule.
aws s3api put-bucket-replication --bucket <source bucket name> --replication-configuration file://replication-config.json --token <token you received from AWS Support>
  4. Open the S3 console, locate your source bucket with the existing objects, and select the bucket name.

a. Select the Management tab to verify that the new replication rule was created.

5. Select the Properties tab of the S3 bucket and navigate to the Event notifications section.

a. Under Amazon EventBridge, select Edit and select On.

i. We use these event notifications later for monitoring the S3 Batch Replication job status.

b. Select Save changes.

As the next step, you can set up EventBridge to receive real-time notifications on the status of S3 Batch Replication jobs, as shown in the sketch following this paragraph. For more examples, check out the documentation.
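
One possible way to wire this up from the CLI is sketched below. It assumes Batch Operations job activity reaches EventBridge as CloudTrail service events and that you already have an SNS topic with an email subscription; the rule name, topic name, and event names are illustrative, so verify the pattern against the current documentation:

# Rule matching S3 Batch Operations job events delivered via CloudTrail
aws events put-rule \
    --name s3-batch-replication-job-status \
    --event-pattern '{
        "source": ["aws.s3"],
        "detail-type": ["AWS Service Event via CloudTrail"],
        "detail": {
            "eventSource": ["s3.amazonaws.com"],
            "eventName": ["JobCreated", "JobCompleted", "JobFailed"]
        }
    }'

# Send matched events to your SNS topic (the topic policy must allow events.amazonaws.com to publish)
aws events put-targets \
    --rule s3-batch-replication-job-status \
    --targets Id=sns-email,Arn=arn:aws:sns:<region>:<source account number>:<your topic name>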

7. Create S3 Batch Replication job

In this section you create the S3 Batch Replication job to replicate your objects to the new S3 recovery bucket.

  1. In the source bucket account, open the Amazon S3 console and select Batch Operations.
  2. Select the Create job button and choose the following options:

a. Select Create manifest using S3 Replication configuration. This automatically generates a manifest file based on the bucket contents and also lets you filter objects by their creation timestamps.
b. Select Bucket in this AWS account and choose your source bucket.
c. For Replication status, select Replica. All of the objects in this bucket were replicated from your original source, so they carry a status of Replica.

This diagram shows how to select the S3 managed keys (SSE-S3) encryption option for the source bucket.

d. In the Batch Operations manifest section, select Save Batch Operations manifest.
e. Select Bucket in this AWS account and choose the bucket you created earlier.
f. Under Encryption, select Enable and choose Amazon S3-managed keys (SSE-S3).

This diagram shows how to set up the Batch Operations manifest: enabling the server-side encryption option and selecting the SSE-S3 encryption type.

g. Select Next and choose Replicate.
h. Select Next and under Completion report, select Generate Completion report and choose All tasks.
i. Set the destination for the reports to the bucket you created earlier.
j. Under Permissions, select Choose from existing IAM roles and select the role you created earlier. Select Next.

This diagram shows the option for generating the Completion report, with the report destination bucket's ARN as the destination path.

This diagram shows how to set up the correct permissions by using the newly created IAM role for S3 Batch Replication.

k. Select Create job. The job begins preparing. Refresh and wait for a status of Awaiting your confirmation to run.
l. Select the job and choose Run job. This begins the replication process.

  3. You can now monitor the job status in the console, and you should also receive email notifications alerting you to job status changes.
  4. Once the job completes, navigate to your destination bucket and verify that all the objects have been replicated. Observe the Object Lock status and retention; these should be identical to the source bucket. The CLI sketch below shows one way to confirm this.
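
As a hedged sketch, the same checks can be run from the AWS CLI; the job ID is shown in the Batch Operations console, and the bucket name and object key below are placeholders:

# Check the Batch Operations job status and progress
aws s3control describe-job \
    --account-id <source account number> \
    --job-id <job id> \
    --query 'Job.{Status: Status, Progress: ProgressSummary}'

# Spot-check a replicated object's Object Lock settings in the recovery bucket
aws s3api head-object \
    --bucket mys3-recovery-bucket \
    --key <object key> \
    --query '{LockMode: ObjectLockMode, RetainUntil: ObjectLockRetainUntilDate, Status: ReplicationStatus}'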

Cleaning up

If you followed along and want to avoid incurring unexpected charges, remember to delete the source and target buckets. Also delete any AWS Lambda functions, Amazon EventBridge rules, AWS IAM roles and policies, and AWS KMS keys that you created.

Summary

In this second post of the two-part blog series, we focused on using Amazon S3 Batch Replication to replicate your existing objects stored in Amazon S3 to a new AWS account and S3 bucket, to help you begin the recovery of your workloads in the event of a ransomware or other malicious incident.

The architectural guidelines provided here help you implement a solution to effectively retrieve your assets in the event that a ransomware or other malicious incident has compromised your S3 bucket(s). With this solution, you can quickly recover your data and minimize any potential business downtime.

If you have any feedback or comments, don’t hesitate to leave them in the comments section.

Saurav Bhattacharya

Saurav is a Senior Solutions Architect with 16+ years of experience, mostly within the telecom, broadband, and media domains. At AWS, he is focused on solving the challenges of media and entertainment (M&E) customers and building solutions to accelerate digital transformation.

Michael Galvin

Michael is a Storage Specialist Solutions Architect at AWS with over 25 years in IT. Michael helps enterprise customers architect, build, and migrate workloads to AWS to help meet their technical and business needs. Outside of work he enjoys working on projects around the house and spending time with his wife and 3 boys.

Oleg Chugaev

Oleg Chugaev is a Principal Solutions Architect and Serverless evangelist with 20+ years in IT, holding 9 AWS certifications. At AWS, he drives customers through their cloud transformation journeys by converting complex challenges into actionable roadmaps for both technical and business audiences.