AWS Cloud Operations Blog

Automated just-in-time storage for SQL Server backup using AWS Systems Manager Automation

There are times when you need fairly large storage volumes for use cases that are infrequent but recurring. For example, one AWS customer needed multiple terabytes of Amazon Elastic Block Store (Amazon EBS) volumes available for taking Microsoft SQL Server (MSSQL) full backups. The backup job was scheduled as a weekly task, but the customer was paying for this storage capacity even when no backup jobs were running.

This situation is unavoidable when running workloads such as MSSQL on-premises. One of the benefits of the cloud is that you can provision resources on demand and pay only for what you consume. That’s why this AWS customer decided to provision EBS volumes and attach them to their SQL Server EC2 instances just in time for the backup jobs, and then delete the volumes upon job completion. The customer could provision and attach EBS volumes by invoking simple API requests, but they use AWS Systems Manager Automation to turn the whole workflow into a hands-off solution.

Solution

The solution uses Amazon CloudWatch Events to trigger the workflow on a weekly schedule. The Systems Manager Automation runbook includes the following steps, as shown in Figure 1.

  1. Identify the target instances based on user-defined tags.
  2. Provision EBS volumes and attach them to the target EC2 instances.
  3. Find the volume ID values of the volumes created in the previous step.
  4. Enable and mount the volume in Windows.
  5. Run the backup job and move the backup file to S3.
  6. Unmount the volume in Windows.
  7. Find the volume ID values of the volumes.
  8. Delete the EBS volume.

Figure 1: Automation workflow steps

Although we could have simply used the volume ID values output by step 2, adding a separate step to find the volume IDs enables the workflow to handle situations where volumes might already exist from a previous, incomplete run of the Automation runbook.
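For reference, the volume lifecycle that the runbook automates maps to a handful of EC2 API calls. The following is a minimal sketch using AWS Tools for PowerShell, not the runbook itself; the Availability Zone, volume size, instance ID, and device name are placeholder assumptions for illustration.

# Provision a temporary volume in the instance's Availability Zone (workflow step 2)
$volume = New-EC2Volume -AvailabilityZone 'us-east-1a' -Size 500 -VolumeType gp3

# Attach the volume to the target SQL Server instance (workflow step 2)
Add-EC2Volume -VolumeId $volume.VolumeId -InstanceId 'i-0123456789abcdef0' -Device 'xvdz'

# Re-discover attached volumes by filter (workflow steps 3 and 7); this also picks up
# volumes left over from a previous incomplete run
$attached = Get-EC2Volume -Filter @{ Name = 'attachment.instance-id'; Values = 'i-0123456789abcdef0' }

# After the backup completes and the disk is taken offline, detach and delete the volume
# (workflow step 8); in practice, wait for the detach to finish before deleting
Dismount-EC2Volume -VolumeId $volume.VolumeId
Remove-EC2Volume -VolumeId $volume.VolumeId -Force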

Prerequisites

Some of the prerequisites mentioned here are specific to AWS Systems Manager. Even if you are currently using AWS Systems Manager, we strongly encourage you to review them.

  1. Install the Systems Manager Agent (SSM Agent) on your SQL Server EC2 instances. SSM Agent is installed by default on Amazon-provided Amazon Machine Images (AMIs).
  2. The Systems Manager Agent must be able to communicate with the Systems Manager APIs. You will need access to the API endpoint through an internet gateway, NAT gateway, proxy, or VPC endpoint.
  3. Create an AWS Identity and Access Management (IAM) instance profile with permissions to communicate with the Systems Manager API.
  4. Tag the SQL Server EC2 instances to mark them as targets for the Automation runbook. The runbook finds target SQL Server EC2 instances using EC2 tags. You can choose any tag key-value pair, provided it is applied exclusively to the target EC2 instances (see the tagging example after this list).
  5. Grant access to the Amazon Simple Storage Service (Amazon S3) bucket that is used to store the SQL Server backup file. The permissions required are discussed in the next section.
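For example, the tagging in step 4 can be done with a short AWS Tools for PowerShell snippet. The BackupAuto/Yes pair used later in this post is shown here, and the instance ID is a placeholder:

# Tag a SQL Server instance so the Automation runbook can discover it
$tag = New-Object Amazon.EC2.Model.Tag
$tag.Key   = 'BackupAuto'
$tag.Value = 'Yes'
New-EC2Tag -Resource 'i-0123456789abcdef0' -Tag $tag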

Create the SSM Automation runbook

Each step in the Automation runbook is configured individually and executed sequentially. For a simpler implementation of the SQL Server backup automation, there is a CloudFormation template in the AWS Samples GitHub repository that you can use to configure the Systems Manager Automation runbook and S3 buckets. Once the template is deployed successfully, it creates two S3 buckets: one for storing the PowerShell scripts used in the Automation runbook and the other for storing SQL Server backup files.

To create the CloudFormation stack

  1. Save the template on your local computer or upload it to S3.
  2. Open the AWS CloudFormation console and create a CloudFormation stack.
  3. In Specify stack details, enter a name for your stack.
  4. In Parameters, enter the information shown in Figure 2, and then choose Next.

As mentioned earlier, two S3 buckets are created. Use the Environment parameter to append the environment description to the bucket name. For example, if you specify sql-backup-auto-blog for the backup bucket and test for Environment, the bucket is named sql-backup-auto-blog-test.

In Specify stack details, under Parameters, BackupBucketName is set to sql-backup-auto-blog, Environment is set to test, and ScriptBucketName is set to sql-backup-auto-blog-scripts.

Figure 2: Specify stack details

  5. Follow the instructions in Setting AWS CloudFormation stack options.
  6. On the Review page, select the checkbox, and then choose Create stack.
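If you prefer to create the stack from the command line rather than the console, the following is a rough equivalent using AWS Tools for PowerShell. The stack name and template file name are placeholders, and the parameter keys are assumed to match those shown in Figure 2:

# Create the stack; the IAM capability acknowledges the IAM role the template creates
New-CFNStack -StackName 'sql-backup-automation' `
             -TemplateBody (Get-Content -Path '.\sql-backup-automation.yaml' -Raw) `
             -Parameter @(
                 @{ ParameterKey = 'BackupBucketName'; ParameterValue = 'sql-backup-auto-blog' },
                 @{ ParameterKey = 'ScriptBucketName'; ParameterValue = 'sql-backup-auto-blog-scripts' },
                 @{ ParameterKey = 'Environment';      ParameterValue = 'test' }
             ) `
             -Capability CAPABILITY_NAMED_IAM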

The stack creates the Automation runbook, the S3 buckets, and the IAM role required to execute the Automation runbook. You must also grant access to the backup S3 bucket so the SQL Server backup can be copied there. To do that, add S3 permissions to the instance profile attached to every SQL Server instance targeted by the automation solution. The IAM role attached to the instance must have read, write, list, and delete permissions on the S3 backup bucket. For information about how to write an IAM policy that grants access to an S3 bucket, see the Writing IAM Policies: How to Grant Access to an Amazon S3 Bucket blog post.

Here is a sample IAM policy:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:GetBucketLocation",
        "s3:ListAllMyBuckets"
      ],
      "Resource": "*"
    },
    {
      "Effect": "Allow",
      "Action": ["s3:ListBucket"],
      "Resource": ["arn:aws:s3:::testBucket"]
    },
    {
      "Effect": "Allow",
      "Action": [
        "s3:PutObject",
        "s3:GetObject",
        "s3:DeleteObject"
      ],
      "Resource": ["arn:aws:s3:::testBucket/*"]
    }
  ]
}

Note: The sample policy is configured to grant access to testBucket for SQL Server backup. Be sure to replace this value with the name of your bucket.

The newly created Automation runbook is displayed in the AWS Systems Manager console on the Owned by me tab.

Figure 3: New SQL Backup Automation Runbook

Upload scripts to the S3 bucket

The Automation runbook relies on the following PowerShell scripts to perform certain tasks. Before you execute the Automation runbook, make sure you have uploaded these scripts to the scripts S3 bucket created in Create the SSM Automation runbook.

Let’s look at the contents of the scripts and the sequence in which they are executed. Only the runbook steps that require a script are listed here. All of the scripts are available in the AWS Samples GitHub repository, and a condensed sketch of the script logic follows the list.

  • Initialize Disk (InitializeDisk.ps1) This is step 3 in the Automation runbook. It’s executed after a temporary EBS volume is attached to the SQL Server EC2 instance. This step initializes the new disk in the OS to store the backup and assigns Z as the drive letter. If the Z drive is already in use by the OS, change the script to use a different drive letter before you execute it.
  • Run Backup Job (BackupJob.ps1) This is step 4 in the Automation runbook. It’s executed after the new disk is initialized and assigned a drive letter. Change the value of the $backupDir parameter to match the drive letter assigned in the previous step. By default, this step executes a SQL Server backup job and stores the backup file under Z:\Backups.
  • Run S3 Copy (S3Copy.ps1) This is step 5 in the Automation runbook. It’s executed after the SQL Server backup is stored on the local drive. This step copies the backup file from the local temporary drive to the S3 backup bucket.

Before you copy this script to the S3 scripts bucket, replace the placeholders in the script with the S3 backup bucket name and the AWS Region you want to use. If you are using server-side encryption for S3, include the AWS Key Management Service (AWS KMS) key ARN parameter and the ‘-ServerSideEncryption’ parameter in the Write-S3Object command. For information about how to find the S3 ARN, see Amazon S3 resources in the Amazon S3 User Guide.

  • Unmount Disk (UnmountDisk.ps1) This is step 6 in the Automation runbook. It’s executed after the SQL Server backup is copied to S3. This step takes the volume offline in the OS before it is detached from the EC2 instance.
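The following is a condensed, hedged sketch of the script logic described above, not a copy of the published scripts. The drive letter, database name, bucket name, Region, and file paths are placeholders, and the backup step assumes the SqlServer PowerShell module is available on the instance.

# InitializeDisk.ps1 (runbook step 3): initialize the new raw disk and assign drive letter Z
Get-Disk | Where-Object PartitionStyle -eq 'RAW' |
    Initialize-Disk -PartitionStyle GPT -PassThru |
    New-Partition -DriveLetter Z -UseMaximumSize |
    Format-Volume -FileSystem NTFS -NewFileSystemLabel 'SqlBackup' -Confirm:$false

# BackupJob.ps1 (runbook step 4): run a full backup to the temporary drive
$backupDir  = 'Z:\Backups'
New-Item -ItemType Directory -Path $backupDir -Force | Out-Null
$backupFile = Join-Path $backupDir "MyDatabase_$(Get-Date -Format yyyyMMdd).bak"
Invoke-Sqlcmd -ServerInstance 'localhost' -Query "BACKUP DATABASE [MyDatabase] TO DISK = N'$backupFile'"

# S3Copy.ps1 (runbook step 5): copy the backup file to the S3 backup bucket
Write-S3Object -BucketName 'sql-backup-auto-blog-test' -Region 'us-east-1' `
               -File $backupFile -Key "Backups/$(Split-Path $backupFile -Leaf)"
# If the bucket uses SSE-KMS, also add, for example:
#   -ServerSideEncryption aws:kms -ServerSideEncryptionKeyManagementServiceKeyId '<kms-key-arn>'

# UnmountDisk.ps1 (runbook step 6): take the temporary volume offline before it is detached
Set-Disk -Number (Get-Partition -DriveLetter Z).DiskNumber -IsOffline $true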

Execute the Automation workflow

The Systems Manager Automation workflow can be executed manually or on a schedule. You can use the console or the AWS CLI to execute the workflow manually, and you can use CloudWatch Events to run the workflow on a schedule. For more information, see Running automations with triggers using CloudWatch Events. You can also use AWS Systems Manager State Manager to run the automation workflow; a sample code snippet is provided in the CloudFormation template for your use.
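As a sketch of the scheduled option, the following AWS Tools for PowerShell commands create a weekly CloudWatch Events rule that targets the runbook. The rule name, account ID, Region, runbook name, and the role that CloudWatch Events assumes to start the automation are all placeholder assumptions.

# Create a rule that fires once a week
Write-CWERule -Name 'WeeklySqlBackupAutomation' -ScheduleExpression 'rate(7 days)' -State 'ENABLED'

# Point the rule at the Automation runbook, passing the tag parameters as input
$target         = New-Object Amazon.CloudWatchEvents.Model.Target
$target.Id      = 'SqlBackupRunbook'
$target.Arn     = 'arn:aws:ssm:us-east-1:111122223333:automation-definition/<runbook-name>:$DEFAULT'
$target.RoleArn = 'arn:aws:iam::111122223333:role/<events-start-automation-role>'
$target.Input   = '{"TargetTagName":["BackupAuto"],"TargetTagValue":["Yes"]}'
Write-CWETarget -Rule 'WeeklySqlBackupAutomation' -Target $target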

In this blog post, we show you how to perform a simple manual execution of the Automation workflow. When you run the automation manually, the workflow runs in the context of the current IAM user so you don’t need to configure additional permissions as long as the user has permission to execute the Automation runbook and any actions called in the runbook.

  1. From the left navigation pane of the AWS Systems Manager console, choose Automation, and then choose Execute automation.
  2. Choose Owned by me, and then select the Automation runbook. The runbook name should begin with the name you specified for the CloudFormation stack.
  3. Choose Execute automation and then choose Simple execution.

The target for the Automation runbook relies on EC2 tags. For this example, I’ve added a tag with a key of BackupAuto and a value of Yes to the SQL Server EC2 instance, as shown in Figure 4. When the automation is executed, it scans the EC2 instances and picks the ones that match the tag.

 

The Tags tab is selected and displays a tag with a key of BackupAuto and a value of Yes.

Figure 4: EC2 tags

In Input parameters, under TargetTagValue, Yes is entered. Under TargetTagName, BackupAuto is entered.

Figure 5: Input parameter

Choose Execute.

The Executed steps page displays step ID, number, name, action, and status.

Figure 6: Status of workflow
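The same simple execution can also be started outside the console. Here is a hedged sketch using AWS Tools for PowerShell; the runbook name is a placeholder (it begins with your CloudFormation stack name), and the parameter keys match those shown in Figure 5.

# Start a simple execution of the runbook against instances tagged BackupAuto=Yes
Start-SSMAutomationExecution -DocumentName '<stack-name>-sql-backup-runbook' `
                             -Parameter @{ TargetTagName = 'BackupAuto'; TargetTagValue = 'Yes' }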

You can also check the Description tab for the EC2 instance to confirm that a new volume is attached. Figure 7 shows that a new block device, xvdz, has been added.

Description tab displays the instance ID, instance state (in this example, running), instance type (m5a.xlarge), and other information. The Block devices field displays /dev/sda1, xvdf, and xvdz.

Figure 7: EC2 instance description

Cleanup

To delete the Automation runbook, delete the CloudFormation stack. The SQL Server backups stored in S3 are not deleted automatically; you must delete them manually.
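If you want to script the cleanup, a minimal sketch with AWS Tools for PowerShell follows; the stack and bucket names are placeholders. Empty the backup bucket before deleting the stack, because CloudFormation cannot delete a bucket that still contains objects.

# Delete all backup files from the backup bucket (example name from Figure 2)
$backupBucket = 'sql-backup-auto-blog-test'
Get-S3Object -BucketName $backupBucket |
    ForEach-Object { Remove-S3Object -BucketName $backupBucket -Key $_.Key -Force }
# Repeat for the scripts bucket if you want CloudFormation to delete it as well

# Delete the CloudFormation stack, which removes the runbook and the other resources the stack created
Remove-CFNStack -StackName 'sql-backup-automation' -Force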

Conclusion

In this blog post, we showed you how to use Systems Manager Automation to attach an EBS volume to an EC2 instance running SQL Server on the fly and use it as temporary storage for the SQL Server backup. The automation workflow copied the backup to S3 for a longer retention period and deleted the temporary EBS volume to reduce storage costs.

Refer to the links below if you are interested in learning more about Systems Manager Automation and Document Builder.

Automation Walkthrough – https://docs.aws.amazon.com/systems-manager/latest/userguide/automation-walk.html

Document Builder – https://docs.aws.amazon.com/systems-manager/latest/userguide/automation-document-builder.html

About the authors

Garry Singh

Garry Singh is a Senior Solutions Architect with Amazon Web Services. He works with AWS customers to provide guidance and technical assistance to help them achieve the best outcome for Microsoft workloads on AWS.

Sepehr Samiei

Sepehr Samiei is currently a Principal Solutions Architect at AWS. He started his professional career as a .NET developer and worked in that space for more than 10 years. He quickly became a fan of cloud computing and loves to help customers utilise the power of Microsoft tech on AWS. His wife and children are the most precious parts of his life.

Sukesh Kesavankutty

Sukesh Kesavankutty is a Senior Technical Account Manager with Amazon Web Services based in Wellington, New Zealand. He enjoys working with customers to improve the operational efficiency of their workloads on AWS using DevOps best practices.