AWS Cloud Operations Blog

Create event-driven workflow with AWS Resource Groups lifecycle events

AWS Resource Groups recently announced a new feature that pushes group lifecycle changes to Amazon EventBridge. A resource group is a collection of AWS resources, in the same AWS Region, that are grouped using either a tag-based query or an AWS CloudFormation stack-based query. Group lifecycle events make it easier for AWS customers to receive notifications and react quickly and efficiently to changes in their resource groups.

Prior to this release, customers needed to develop internal tools and scripts that polled the Resource Groups API millions of times daily to identify state changes in their resource groups and act in response to those changes. For example, when a team’s polling infrastructure detected that a new Amazon EC2 instance had been added to a resource group, the team ran scripts that configured servers, patched instances, or installed software agents. Polling resource groups with custom-built tooling like this increases operational cost and complexity for you as a customer.

To address such pain points, we released group lifecycle events for Resource Groups, which pushes events to EventBridge as changes happen. With EventBridge, customers can easily set up rules to automate operational tasks based on events. The new functionality automatically emits events when lifecycle changes occur to the resource groups themselves, or when member resources are added to or removed from the resource groups. This feature enables customers to build workflows that act on the changes, letting engineering teams achieve operational gains without spending additional effort just to identify state changes. For more information about Resource Groups events, see the release announcement and Monitoring resource groups for changes in the Resource Groups User Guide.
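For orientation, a membership change event delivered to EventBridge looks roughly like the following sketch. This is illustrative only: the real event carries additional metadata (account, Region, time, group details), and the exact field set should be confirmed against the Resource Groups User Guide. The ARN shown is hypothetical.

```python
# Illustrative sketch of a "Group Membership Change" event; the real event
# contains additional envelope fields (account, region, time, and so on).
sample_event = {
    "source": "aws.resource-groups",
    "detail-type": "ResourceGroups Group Membership Change",
    "detail": {
        "resources": [
            # hypothetical instance ARN for illustration
            {"arn": "arn:aws:ec2:eu-west-1:123456789012:instance/i-0abc1234",
             "membership-change": "add"},
        ],
    },
}

# A consumer inspects detail.resources to see which members were added
added = [r["arn"] for r in sample_event["detail"]["resources"]
         if r["membership-change"] == "add"]
```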

In this blog post, I show you how to benefit from Resource Groups lifecycle events with two different use cases.

  1. Automated tagging of resources within a resource group
  2. Auto-subscribe newly created Amazon CloudWatch log groups to a centralized log stream

Prerequisite

Before digging into any specific use case, you must first enable Resource Groups lifecycle events in the AWS Region and account in which you are going to host your resource groups.

You can turn on group lifecycle events by using the AWS Management Console, a command from the AWS CLI, or one of the AWS SDK APIs.

To turn on group lifecycle events via the console:

  1. Open the Settings page in the Resource Groups & Tag Editor console.
  2. In the Group lifecycle events section, choose the switch next to Notifications are turned off.
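With an AWS SDK, the same setting can be toggled through the Resource Groups UpdateAccountSettings API. The following Python sketch uses boto3 with a lazy import so the request-building helper stays usable without the SDK installed; treat it as an outline rather than production code, and note that actually calling it requires AWS credentials with the appropriate permissions.

```python
def lifecycle_settings_request(enable=True):
    """Build the UpdateAccountSettings request parameters (pure, testable)."""
    return {"GroupLifecycleEventsDesiredStatus": "ACTIVE" if enable else "INACTIVE"}

def enable_lifecycle_events():
    import boto3  # imported lazily; requires AWS credentials to actually run
    client = boto3.client("resource-groups")
    client.update_account_settings(**lifecycle_settings_request(True))
    # The change is applied asynchronously; check the resulting status
    return client.get_account_settings()["AccountSettings"]
```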

If you want to follow the steps for the use cases below, also clone this GitHub repository to your local machine. The repository contains the CloudFormation templates and the code for the Lambda functions. Please note that the resources you deploy here can incur some small charges.

Use case 1: Automated tagging of resource groups resources

Consistently applying tags to resources delivers organizational benefits such as accurate cost allocation, granular access controls, precise routing of operational issues, and simpler changes to resource operating states. However, without automation, proper tagging can be a challenge, either because development teams forget to add tags or because the applied tags are not standardized. To remove this overhead from the team, while making sure that resources are tagged according to organizational best practices, we suggest an automated tagging mechanism that uses Resource Groups lifecycle events. Note that it is also possible to use AWS CloudTrail events for such use cases, as described in this blog post; with Resource Groups events, however, you can add or remove tags on a group of resources at the same time.

You can use the auto-tagging solution described in this section to apply your organization’s defined tags to newly created resources that join a resource group. In production environments, you can use a service such as AWS Systems Manager Parameter Store to store your organization-specific tags, and have the Lambda function query the tags based on the saved configuration and apply them to your resources. In this post, however, we use static tags within the Lambda function code for simplicity.

Figure 1 shows this use case architecture and its five-step workflow.

Architecture diagram showing the 5 steps workflow of the use case that are described in the workflow section.

Figure 1. Auto-tagging workflow and architecture diagram

Workflow steps

  1. A user deploys a new CloudFormation nested stack that creates AWS resources, including EC2 instances, and a new resource group.
  2. The resource group created by the stack matches the EC2 instances created by the CloudFormation stack against its query rule and adds them as members.
  3. Resource Groups lifecycle events emits a “Group Membership Change” event to EventBridge for each newly added EC2 instance.
  4. EventBridge uses a rule to detect the event and invokes a Lambda function.
  5. The Lambda function applies the tags to the newly created EC2 instance.
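The heart of the auto-tagging function can be sketched as follows. This is a simplified outline, not the repository’s actual resource-auto-tagger.py: the static TAGS dictionary mirrors the tags used later in this post, and the event parsing assumes the detail.resources structure that the EventBridge rule in Step 2 filters on.

```python
# Simplified sketch of an auto-tagging handler; not the repository's
# actual resource-auto-tagger.py. TAGS mirrors the static tags in this post.
TAGS = {"Project": "RG Lifecycle", "Resource_Group": "GLEstack"}

def added_member_arns(event):
    """Extract ARNs of resources that were just added to the group."""
    resources = event.get("detail", {}).get("resources", [])
    return [r["arn"] for r in resources if r.get("membership-change") == "add"]

def lambda_handler(event, context):
    arns = added_member_arns(event)
    if arns:
        import boto3  # lazy import keeps the parsing helper testable offline
        # The Resource Groups Tagging API tags many resource types in one call
        boto3.client("resourcegroupstaggingapi").tag_resources(
            ResourceARNList=arns, Tags=TAGS)
    return {"tagged": arns}
```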

Solution setup

Follow these steps to set up the auto-tagging solution. These steps assume that you have already enabled the lifecycle events feature in Resource Groups in the current AWS account and Region.

Step 1: Deploy CloudFormation stack for Lambda function and IAM roles

This CloudFormation stack creates a Lambda function called “resource-auto-tagger” and an IAM role that has the required permissions and can be assumed by Lambda. It also attaches the IAM role to the Lambda function.

To deploy the CloudFormation stack for this section, follow these steps:

  1. Navigate to the Git repository you cloned above, and create a zip file containing “resource-auto-tagger.py” to be used as the Lambda code.
  2. Create an S3 bucket, and upload the zip file into the bucket.
  3. Open the CloudFormation console, choose Create stack, and then choose With new resources (standard).
  4. Select Template is ready. Under Specify template, select Upload a template file, choose Choose file, upload auto_tagger_lambda.yaml located in the root folder, and choose Next.
  5. Enter a stack name, add the name of the zip file (“resource-auto-tagger.zip”) you uploaded to your S3 bucket in the LambdaFileName field, and add the name of the S3 bucket you just created in S3BucketName. Leave the rest as is and choose Next. Add tags if you want, and choose Next.
  6. Select the box “I acknowledge that AWS CloudFormation might create IAM resources.”, and choose Submit.

Step 2: Create a rule in EventBridge

To create an EventBridge rule that is triggered by Resource Groups events, do the following:

  1. Open the EventBridge console, and choose Create rule.
  2. Enter a name and description for your rule. Make sure you have chosen the “default” event bus. Select Rule with an event pattern and click Next.
  3. Choose AWS events or EventBridge partner events as the event source.
  4. Scroll down to Event pattern. Under AWS service, choose Resource Groups, and then under Event type, choose All Events (see Figure 2 below).
Screenshot of how to configure EventBridge rule in the console, described in the steps one to four.

Figure 2. Setting up the rule in the EventBridge console

  5. Choose Edit pattern, copy the following JSON data, and paste it into the Event pattern box to replace the existing one. Choose Next.
{
  "source": ["aws.resource-groups"],
  "detail-type": ["ResourceGroups Group Membership Change"],
  "detail": {
    "resources": {
      "membership-change": ["add"]
    }
  }
}
  6. Select Lambda function as the target, and choose the resource-auto-tagger function from the dropdown menu. Choose Next, review the rule, and choose Create rule.
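If you prefer to script the rule rather than use the console, the EventBridge PutRule and PutTargets APIs accept the same event pattern. Below is a minimal boto3 sketch; the rule and target names are hypothetical, and note that the Lambda function additionally needs a resource-based permission allowing EventBridge to invoke it.

```python
import json

# Same event pattern as in the console steps above
EVENT_PATTERN = {
    "source": ["aws.resource-groups"],
    "detail-type": ["ResourceGroups Group Membership Change"],
    "detail": {"resources": {"membership-change": ["add"]}},
}

def create_auto_tagger_rule(rule_name, function_arn):
    import boto3  # lazy import; requires AWS credentials to actually run
    events = boto3.client("events")
    events.put_rule(Name=rule_name, EventPattern=json.dumps(EVENT_PATTERN))
    # The function must also grant events.amazonaws.com invoke permission
    # (lambda add-permission) for this rule to deliver events.
    events.put_targets(Rule=rule_name,
                       Targets=[{"Id": "auto-tagger-target", "Arn": function_arn}])
```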

Step 3: Verify the auto-tagging functionality

Now it’s time to verify the auto-tagging functionality by deploying the following CloudFormation nested stack. This stack creates the resources needed to host a simple EC2 instance, and creates a new resource group whose query includes all of the EC2 instances belonging to the initial stack as group members.

  1. Navigate to the repository you cloned before, and go to the Scenario1Step3 folder. Upload all of the YAML files there to the S3 bucket you created above.
  2. Open the CloudFormation console, choose Create stack, and then choose With new resources (standard).
  3. Select Template is ready. Under Specify template, select Upload a template file, choose Choose file, upload main.yaml from the repository root folder, and choose Next.
  4. Enter a stack name, select any Availability Zone from the drop-down menu, and add the name of the S3 bucket you created above to the text field under S3BucketName. Accept the default values and choose Next.
  5. Leave the rest as default, and choose Next.
  6. Select the boxes “I acknowledge that AWS CloudFormation might create IAM resources” and “I acknowledge that AWS CloudFormation might require the following capability: CAPABILITY_AUTO_EXPAND”, and choose Submit.
  7. Wait until all stacks show the CREATE_COMPLETE status. You should see one main stack and four nested stacks in the CloudFormation console.
  8. Navigate to the Resource Groups & Tag Editor console, and check the newly created resource group and its members. Verify that the EC2 instance matches the one created by the CloudFormation EC2 nested stack.
  9. Then navigate to the EC2 console, select the newly created instance, and check the tags listed on the Tags tab. You should see two new tags, “Project”: “RG Lifecycle” and “Resource_Group”: “GLEstack”, which were added by the Lambda function invoked via the Resource Groups membership update lifecycle events.

Step 4: Clean up

After you have completed the steps, delete the resources you no longer need in order to keep charges to a minimum.

  1. Navigate to the CloudFormation console.
  2. On the Stacks page in the console, select one of the stacks you have created above.
  3. In the stack details pane, choose Delete to delete the stack, and then choose Delete stack to confirm.
  4. Repeat steps above to delete the other stack you have created.
  5. Go to S3 console, select the bucket you have created and delete all the files inside the bucket, and then delete the bucket itself.
  6. Navigate to EventBridge console and delete the rule you have created above.

Beyond the basics

Now that you know how to standardize tags for your resources, you can expand this use case for the following operational tasks:

  • Tags for cost allocation: apply cost allocation tags to your applications and resources to improve your FinOps practice and gain more granular cost visibility and breakdown within your organization and multi-account structure.
  • Tags for automation: filter resources during infrastructure automation activities; opt in to or out of automated tasks; and perform backup, update, and delete actions based on tags.

Use case 2: Auto-subscribe newly created log group in CloudWatch Logs to a centralized log stream

Centralized logging helps organizations collect, analyze, and display CloudWatch Logs in a single dashboard. This use case shows how Resource Groups lifecycle events can be used to consolidate logs from newly built Lambda functions related to a single application, environment, or domain.

The AWS Solution for centralized logging describes how to use CloudWatch subscription filters to send the logs from different log groups to a centralized logging service. However, adding newly created log groups to the solution is not addressed, because the CloudWatch subscription needs the name of the log group as input for creating and applying the filter, and that name is not known in advance. This is especially challenging for Lambda functions, where a new CloudWatch log group with the same name as the function is created automatically upon its creation. Consider the scenario where you want to gather the logs for only a group of Lambda functions, or only at a certain stage of the development cycle. All of this is possible using Resource Groups lifecycle events, with high granularity and flexibility, as described in the following section.

Figure 3 shows this use case architecture and its workflow.

Architecture diagram showing the eight-step workflow of use case two, described in the workflow section.

Figure 3. Auto-subscribe workflow and architecture diagram

Workflow steps

  1. A user creates a new Lambda function and adds the proper tag related to the resource group to it.
  2. A new CloudWatch log group is created with the same name as the Lambda function.
  3. Tagging the function adds it to the resource group as a member.
  4. Resource Groups emits a group membership change event to EventBridge.
  5. An EventBridge rule detects the event and invokes the second Lambda function.
  6. The Lambda function adds a CloudWatch subscription filter pointing to the CloudWatch Logs destination.
  7. The destination applies the filter, so the logs are directed to Amazon Kinesis Data Streams.
  8. Lambda logs from the log groups are sent to the Kinesis data stream.
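Steps 5 and 6 above can be sketched in Python as follows. This is an outline under a few assumptions: the destination ARN arrives via a hypothetical DESTINATION_ARN environment variable, the filter name is made up, and the event parsing relies on the detail.resources structure used in the EventBridge rule’s event pattern.

```python
import os

# Hypothetical environment variable carrying the ARN of the CloudWatch Logs
# destination created by the CloudFormation stack.
DESTINATION_ARN = os.environ.get("DESTINATION_ARN", "")

def log_group_for_function(function_arn):
    """Lambda log groups follow the /aws/lambda/<function-name> convention."""
    function_name = function_arn.split(":function:")[-1]
    return f"/aws/lambda/{function_name}"

def lambda_handler(event, context):
    import boto3  # lazy import keeps the helper above testable offline
    logs = boto3.client("logs")
    for resource in event.get("detail", {}).get("resources", []):
        if resource.get("membership-change") != "add":
            continue
        logs.put_subscription_filter(
            logGroupName=log_group_for_function(resource["arn"]),
            filterName="central-logging",  # hypothetical filter name
            filterPattern="",              # empty pattern forwards every log event
            destinationArn=DESTINATION_ARN,
        )
```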

Solution setup

Follow these steps to set up this use case (assuming you have already enabled the lifecycle events feature in Resource Groups in the current account and AWS Region).

Step 1: Create a tag-based resource group

  1. Open the AWS Resource Groups & Tag Editor console.
  2. Choose Create resource group, and select Tag based for Group type.
  3. Under Grouping criteria, select AWS::Lambda::Function for Resource types.
  4. Under Tags, enter “Environment” as the tag key and “Development” as the tag value (see Figure 4 below), and choose Add.
  5. Under Group details, enter a name such as RG-Development, and choose Create group.
Screenshot of how to create a tag-based resource group via AWS console, described in the steps two to four.

Figure 4. Creating a tag-based resource group
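The same group can also be created programmatically with the Resource Groups CreateGroup API, where a tag-based group is defined by a TAG_FILTERS_1_0 resource query. The boto3 sketch below is an outline, not production code; the group name matches the console steps above.

```python
import json

def tag_group_query(resource_types, tag_key, tag_value):
    """Build a TAG_FILTERS_1_0 resource query (pure, testable)."""
    return {
        "Type": "TAG_FILTERS_1_0",
        "Query": json.dumps({
            "ResourceTypeFilters": resource_types,
            "TagFilters": [{"Key": tag_key, "Values": [tag_value]}],
        }),
    }

def create_development_group():
    import boto3  # lazy import; requires AWS credentials to actually run
    boto3.client("resource-groups").create_group(
        Name="RG-Development",
        ResourceQuery=tag_group_query(
            ["AWS::Lambda::Function"], "Environment", "Development"))
```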

Step 2: Deploy CloudFormation stack for creating Kinesis Data Streams, and log destination subscription

Deploy the CloudFormation stack for this use case as follows:

  1. Open the CloudFormation console, choose Create stack, and then choose With new resources (standard).
  2. Select Template is ready. Under Specify template, select Upload a template file, choose Choose file, upload cloudwatch_logs.yaml located in the main folder, and choose Next.
  3. Add a stack name, fill out the text fields under the Parameters section, choose Next, and then choose Next again.
  4. Select the box “I acknowledge that AWS CloudFormation might create IAM resources”, and choose Submit.

This CloudFormation stack creates the following resources for you:

  1. A Kinesis data stream as the destination of all the logs. All the logs coming from the Lambda functions belonging to the resource group you created above will be sent to this destination.
  2. An IAM role and policy that grant CloudWatch Logs permission to put data into your Kinesis data stream.
  3. A CloudWatch Logs destination to which the CloudWatch logs will be sent.
  4. A Lambda function, and its associated IAM role, to create the CloudWatch subscription filter.

The Kinesis data stream is provisioned so that log events can be indexed into a centralized logging service such as OpenSearch, which is out of the scope of this blog post.

Step 3: Create EventBridge rule

To create a new EventBridge rule that triggers the above Lambda function based on Resource Groups events, follow these steps:

  1. Open the EventBridge console, and select Create rule.
  2. Enter a name and description for your rule. Make sure you have chosen the “default” event bus. Select Rule with an event pattern and click Next.
  3. Under Event Source, choose AWS events or EventBridge partner events as the event source.
  4. Scroll down to Event pattern. Under AWS service, choose Resource Groups, and then under Event type, choose All Events.
  5. Choose Edit pattern, and copy the following JSON data and paste it into the Event pattern box to replace the existing pattern. Choose Next.
{
  "source": ["aws.resource-groups"],
  "detail-type": ["ResourceGroups Group Membership Change"],
  "detail": {
    "resources": {
      "membership-change": ["add"]
    }
  }
}
  6. Finally, select Lambda function as the target, and choose the function you created via the CloudFormation stack deployment above from the dropdown menu. Choose Next, review the rule, and choose Create rule.

Step 4: Test it all together

  1. Create a new Lambda function. You can use Lambda blueprints to create a simple function such as “Hello world!”. Under Execution role, select Create a new role with basic Lambda permissions for the new function.
  2. Trigger the function once using the Test tab to make sure the log group is created. Wait a couple of minutes, and then check Log groups in the CloudWatch console to find the log group for this Lambda function.
  3. Open the Hello World! function in the Lambda console, and manually add a tag with “Environment” as the key and “Development” as the value. To tag a Lambda function, select the Configuration tab, and then select Tags. Please note that tags are case-sensitive.
  4. Go back to the Resource Groups console, and select the resource group you created above. Look for your function under the group’s resources.
  5. After a few seconds, check the function’s log group in the CloudWatch console. Navigate to the Subscription filters tab, and find the new subscription referring to the Kinesis data stream you created before.
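Step 3 above applies the tag in the console; the same tag can be applied with the Lambda TagResource API. A small sketch, with the tag map kept in one helper since keys and values are case-sensitive:

```python
def development_tags():
    """Tag that makes a function match the RG-Development group's query.
    Tag keys and values are case-sensitive."""
    return {"Environment": "Development"}

def tag_function(function_arn):
    import boto3  # lazy import; requires AWS credentials to actually run
    boto3.client("lambda").tag_resource(
        Resource=function_arn, Tags=development_tags())
```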

Step 5: Clean up

After you have completed the steps, delete the resources you no longer need in order to keep charges to a minimum.

  1. Navigate to EventBridge console and delete the rule you have created above.
  2. Navigate to the CloudFormation console.
  3. On the Stacks page in the CloudFormation console, select one of the stacks you have created above.
  4. In the stack details pane, choose Delete to delete the stack, and then choose Delete stack to confirm.
  5. Navigate to Lambda console, and delete the Hello World! function you have created.
  6. Go to the Resource Groups console, and delete the tag-based resource group you created.

Beyond the basics

This use case shows one example of applying an external workflow to your resources and applications. You can use group lifecycle events to attach your internal or external workflows to your applications and resources by following steps similar to the ones above. Some examples:

  • If your organization uses third-party inventory software for compliance, cost visibility, or other operational reasons, you can build a workflow that sends information about your resources as soon as they are added to or removed from your resource group.
  • If you run external workflows through SaaS vendors, such as vulnerability scanning or patching from outside AWS, you can use this feature to make sure your instances or containers are always up to date based on your organization’s specific tooling.

Conclusion

In this post, I have shown you how to use lifecycle events for Resource Groups to create event-driven workflows for managing your operations and resources in your AWS accounts. You have seen how to benefit from the lifecycle events related to both CloudFormation stack-based and tag-based resource groups.

The two use cases above show how you can automate various activities based on membership changes in different groups: automatically adding specific tags to newly created resources (or to resources moved from one resource group to another based on their function), and setting up centralized logging for serverless resources such as Lambda functions upon their creation.

This feature can be useful in automating a variety of operational tasks with high granularity, reducing the overhead of managing infrastructures for your engineering teams. After you have familiarized yourself with this feature by going through the steps in this blog post, start exploring existing workflows within your organization and test if you can automate them using AWS Resource Groups lifecycle events.

About the author:

Mozhgan Mahloo

Mozhgan is an Enterprise Solution Architect at AWS based in Stockholm, Sweden. She works with large Manufacturing customers enabling them to reach their business objectives using AWS services. She has a Ph.D. degree in telecommunications, 8 Patents, 2 book chapters, more than 20 academic publications, and enjoys innovating on behalf of customers.