AWS Compute Blog
Capturing Custom, High-Resolution Metrics from Containers Using AWS Step Functions and AWS Lambda
Contributed by Trevor Sullivan, AWS Solutions Architect
When you deploy containers with Amazon ECS, are you gathering all of the key metrics so that you can correctly monitor the overall health of your ECS cluster?
By default, ECS writes metrics to Amazon CloudWatch in 5-minute increments. For complex or large services, this may not be sufficient to make scaling decisions quickly. You may want to respond immediately to changes in workload or to identify application performance problems. Last July, CloudWatch announced support for high-resolution metrics, down to a one-second granularity.
These high-resolution metrics can be used to give you a clearer picture of the load and performance for your applications, containers, clusters, and hosts. In this post, I discuss how you can use AWS Step Functions, along with AWS Lambda, to cost effectively record high-resolution metrics into CloudWatch. You implement this solution using a serverless architecture, which keeps your costs low and makes it easier to troubleshoot the solution.
To show how this works, you retrieve some useful metric data from an ECS cluster running in the same AWS account and region (Oregon, us-west-2) as the Step Functions state machine and Lambda function. However, you can use this architecture to retrieve any custom application metrics from any resource in any AWS account and region.
Why Step Functions?
Step Functions enables you to orchestrate multi-step tasks in the AWS Cloud that run for any period of time, up to a year. Effectively, you’re building a blueprint for an end-to-end process. After it’s built, you can execute the process as many times as you want.
For this architecture, you gather metrics from an ECS cluster every five seconds and then write the metric data to CloudWatch. After your ECS cluster metrics are stored in CloudWatch, you can create CloudWatch alarms to notify you. An alarm can also trigger an automated remediation activity, such as scaling ECS services, when a metric exceeds a threshold that you define.
When you build a Step Functions state machine, you define the different states inside it as JSON objects. The bulk of the work in Step Functions is handled by the common task state, which invokes Lambda functions or Step Functions activities. There is also a built-in library of other useful states that allow you to control the execution flow of your program.
One of the most useful state types in Step Functions is the parallel state. Each parallel state in your state machine can have one or more branches, each of which is executed in parallel. Another useful state type is the wait state, which waits for a period of time before moving to the next state.
In this walkthrough, you combine these three states (parallel, wait, and task) to create a state machine that triggers a Lambda function, which then gathers metrics from your ECS cluster.
Step Functions pricing
This state machine is executed every minute, resulting in 60 executions per hour and 1,440 executions per day. Step Functions is billed per state transition, including the Start and End state transitions, which gives you approximately 37,440 state transitions per day. To reach this number, I used the following estimate:
26 state transitions per execution x 60 minutes x 24 hours = 37,440
Based on current pricing, at $0.000025 per state transition, the daily cost of this metric gathering state machine would be $0.936.
Step Functions offers 4,000 free state transitions every month, indefinitely. This benefit is available to all customers, not just customers who are still under the 12-month AWS Free Tier. For more information and cost example scenarios, see Step Functions pricing.
Why Lambda?
The goal is to capture metrics from an ECS cluster and write the metric data to CloudWatch. This is a straightforward, short-running process, which makes Lambda the perfect place to run your code. Lambda is one of the key services that make up "serverless" application architectures. It enables you to consume compute capacity only when your code is actually executing.
The process of gathering metric data from ECS and writing it to CloudWatch takes a short period of time. In fact, while developing this post, my Lambda function execution time averaged only about 250 milliseconds. For every five-second interval that occurs, I'm only using 1/20th of the compute time that I'd otherwise be paying for.
Lambda pricing
For billing purposes, Lambda execution time is rounded up to the nearest 100-ms interval. Based on the metrics that I observed during development, a 250-ms runtime would be billed at 300 ms. Here, I calculate the monthly cost of executing this Lambda function.
Assuming 31 days in each month, there would be 535,680 five-second intervals (31 days x 24 hours x 60 minutes x 12 five-second intervals = 535,680). The Lambda function is invoked by the Step Functions state machine at every five-second interval and runs for a 300-ms billed period. At current Lambda pricing, for a 128-MB function, you would be paying approximately the following:
Total compute
Total executions = 535,680
Total compute = total executions x (3 x $0.000000208 per 100 ms) = $0.334 per month
Total requests
Total requests = (535,680 / 1,000,000) x $0.20 per million requests = $0.11 per month
Total Lambda cost
$0.11 requests + $0.334 compute time = $0.444 per month
Similar to Step Functions, Lambda offers an indefinite free tier. For more information, see Lambda Pricing.
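If you want to sanity-check these figures, the following Python sketch reproduces the arithmetic from both pricing sections. The per-transition, per-100-ms, and per-request prices are the published rates at the time of writing and may change, so treat this as a rough back-of-the-envelope check rather than a billing tool.

# Rough cost check for the estimates above (prices current at the time of writing).
transitions_per_execution = 26            # estimated for this state machine
executions_per_day = 60 * 24              # one execution per minute
sfn_daily_cost = transitions_per_execution * executions_per_day * 0.000025
print('Step Functions: ${:.3f} per day'.format(sfn_daily_cost))          # ~$0.936

invocations_per_month = 31 * 24 * 60 * 12                                # 535,680
lambda_compute = invocations_per_month * 3 * 0.000000208                 # billed at 300 ms
lambda_requests = invocations_per_month / 1000000.0 * 0.20
print('Lambda: ${:.2f} per month'.format(lambda_compute + lambda_requests))  # ~$0.44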
Walkthrough
In the following sections, I step through the process of configuring the solution just discussed. If you follow along, at a high level, you will:
- Configure an IAM role and policy
- Create a Step Functions state machine to control metric gathering execution
- Create a metric-gathering Lambda function
- Configure a CloudWatch Events rule to trigger the state machine
- Validate the solution
Prerequisites
You should already have an AWS account with a running ECS cluster. If you don’t have one running, you can easily deploy a Docker container on an ECS cluster using the AWS Management Console. In the example produced for this post, I use an ECS cluster running Windows Server (currently in beta), but either a Linux or Windows Server cluster works.
Create an IAM role and policy
First, create an IAM role and policy that enables Step Functions, Lambda, and CloudWatch to communicate with each other.
- The CloudWatch Events rule needs permissions to trigger the Step Functions state machine.
- The Step Functions state machine needs permissions to trigger the Lambda function.
- The Lambda function needs permissions to query ECS and then write to CloudWatch Logs and metrics.
When you create the state machine, Lambda function, and CloudWatch Events rule, you assign this role to each of those resources. Upon execution, each of these resources assumes the specified role and executes using the role’s permissions.
- Open the IAM console.
- Choose Roles, Create New Role.
- For Role Name, enter WriteMetricFromStepFunction.
- Choose Save.
Create the IAM role trust relationship
The trust relationship (also known as the assume role policy document) for your IAM role looks like the following JSON document. As you can see from the document, your IAM role needs to trust the Lambda, CloudWatch Events, and Step Functions services. By configuring your role to trust these services, they can assume this role and inherit the role permissions.
- Open the IAM console.
- Choose Roles and select the IAM role previously created.
- Choose Trust Relationships, Edit Trust Relationships.
- Enter the following trust policy text and choose Save.
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Principal": {
"Service": "lambda.amazonaws.com"
},
"Action": "sts:AssumeRole"
},
{
"Effect": "Allow",
"Principal": {
"Service": "events.amazonaws.com"
},
"Action": "sts:AssumeRole"
},
{
"Effect": "Allow",
"Principal": {
"Service": "states.us-west-2.amazonaws.com"
},
"Action": "sts:AssumeRole"
}
]
}
Create an IAM policy
After you’ve finished configuring your role’s trust relationship, grant the role access to the other AWS resources that make up the solution.
The IAM policy is what gives your IAM role permissions to access various resources. You must explicitly allow access to the specific resources that your role needs, because the default IAM behavior is to deny access to AWS resources.
I've tried to keep this policy document as generic as possible, without allowing permissions to be too open. If the name of your ECS cluster is different from the one in the example policy below, make sure that you update the policy document before attaching it to your IAM role. You can attach this policy as an inline policy instead of creating the policy separately first; either approach is valid.
- Open the IAM console.
- Select the IAM role, and choose Permissions.
- Choose Add in-line policy.
- Choose Custom Policy and then enter the following policy. The inline policy name does not matter.
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [ "logs:*" ],
"Resource": "*"
},
{
"Effect": "Allow",
"Action": [ "cloudwatch:PutMetricData" ],
"Resource": "*"
},
{
"Effect": "Allow",
"Action": [ "states:StartExecution" ],
"Resource": [
"arn:aws:states:*:*:stateMachine:WriteMetricFromStepFunction"
]
},
{
"Effect": "Allow",
"Action": [ "lambda:InvokeFunction" ],
"Resource": "arn:aws:lambda:*:*:function:WriteMetricFromStepFunction"
},
{
"Effect": "Allow",
"Action": [ "ecs:Describe*" ],
"Resource": "arn:aws:ecs:*:*:cluster/ECSEsgaroth"
}
]
}
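If you prefer to script the IAM setup rather than use the console, here's a minimal boto3 sketch that creates the role and attaches the inline policy. It assumes you've saved the trust policy and permissions policy shown above to local files named trust-policy.json and permissions-policy.json (hypothetical names); the console steps accomplish the same thing.

import boto3

iam = boto3.client('iam')

# Assumes the two JSON documents above were saved to these (hypothetical) files.
with open('trust-policy.json') as f:
    trust_policy = f.read()
with open('permissions-policy.json') as f:
    permissions_policy = f.read()

# Create the role with the trust relationship from the previous section.
iam.create_role(
    RoleName='WriteMetricFromStepFunction',
    AssumeRolePolicyDocument=trust_policy
)

# Attach the permissions policy as an inline policy on the role.
iam.put_role_policy(
    RoleName='WriteMetricFromStepFunction',
    PolicyName='WriteMetricFromStepFunctionPolicy',
    PolicyDocument=permissions_policy
)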
Create a Step Functions state machine
In this section, you create a Step Functions state machine that invokes the metric-gathering Lambda function every five seconds, for a one-minute period. If you divide a minute (60 seconds) into five-second intervals, you get 12. Based on this math, you create 12 branches in a single parallel state in the state machine. Each branch triggers the metric-gathering Lambda function at a different five-second marker throughout the one-minute period. After all of the parallel branches finish executing, the Step Functions execution completes and another begins.
Follow these steps to create your Step Functions state machine:
- Open the Step Functions console.
- Choose Dashboard, Create State Machine.
- For State Machine Name, enter WriteMetricFromStepFunction.
- Enter the state machine code below into the editor. Make sure that you insert your own AWS account ID for every instance of "676655494xxx".
- Choose Create State Machine.
- Select the WriteMetricFromStepFunction IAM role that you previously created.
{
"Comment": "Writes ECS metrics to CloudWatch every five seconds, for a one-minute period.",
"StartAt": "ParallelMetric",
"States": {
"ParallelMetric": {
"Type": "Parallel",
"Branches": [
{
"StartAt": "WriteMetricLambda",
"States": {
"WriteMetricLambda": {
"Type": "Task",
"Resource": "arn:aws:lambda:us-west-2:676655494xxx:function:WriteMetricFromStepFunction",
"End": true
}
}
},
{
"StartAt": "WaitFive",
"States": {
"WaitFive": {
"Type": "Wait",
"Seconds": 5,
"Next": "WriteMetricLambdaFive"
},
"WriteMetricLambdaFive": {
"Type": "Task",
"Resource": "arn:aws:lambda:us-west-2:676655494xxx:function:WriteMetricFromStepFunction",
"End": true
}
}
},
{
"StartAt": "WaitTen",
"States": {
"WaitTen": {
"Type": "Wait",
"Seconds": 10,
"Next": "WriteMetricLambda10"
},
"WriteMetricLambda10": {
"Type": "Task",
"Resource": "arn:aws:lambda:us-west-2:676655494xxx:function:WriteMetricFromStepFunction",
"End": true
}
}
},
{
"StartAt": "WaitFifteen",
"States": {
"WaitFifteen": {
"Type": "Wait",
"Seconds": 15,
"Next": "WriteMetricLambda15"
},
"WriteMetricLambda15": {
"Type": "Task",
"Resource": "arn:aws:lambda:us-west-2:676655494xxx:function:WriteMetricFromStepFunction",
"End": true
}
}
},
{
"StartAt": "Wait20",
"States": {
"Wait20": {
"Type": "Wait",
"Seconds": 20,
"Next": "WriteMetricLambda20"
},
"WriteMetricLambda20": {
"Type": "Task",
"Resource": "arn:aws:lambda:us-west-2:676655494xxx:function:WriteMetricFromStepFunction",
"End": true
}
}
},
{
"StartAt": "Wait25",
"States": {
"Wait25": {
"Type": "Wait",
"Seconds": 25,
"Next": "WriteMetricLambda25"
},
"WriteMetricLambda25": {
"Type": "Task",
"Resource": "arn:aws:lambda:us-west-2:676655494xxx:function:WriteMetricFromStepFunction",
"End": true
}
}
},
{
"StartAt": "Wait30",
"States": {
"Wait30": {
"Type": "Wait",
"Seconds": 30,
"Next": "WriteMetricLambda30"
},
"WriteMetricLambda30": {
"Type": "Task",
"Resource": "arn:aws:lambda:us-west-2:676655494xxx:function:WriteMetricFromStepFunction",
"End": true
}
}
},
{
"StartAt": "Wait35",
"States": {
"Wait35": {
"Type": "Wait",
"Seconds": 35,
"Next": "WriteMetricLambda35"
},
"WriteMetricLambda35": {
"Type": "Task",
"Resource": "arn:aws:lambda:us-west-2:676655494xxx:function:WriteMetricFromStepFunction",
"End": true
}
}
},
{
"StartAt": "Wait40",
"States": {
"Wait40": {
"Type": "Wait",
"Seconds": 40,
"Next": "WriteMetricLambda40"
},
"WriteMetricLambda40": {
"Type": "Task",
"Resource": "arn:aws:lambda:us-west-2:676655494xxx:function:WriteMetricFromStepFunction",
"End": true
}
}
},
{
"StartAt": "Wait45",
"States": {
"Wait45": {
"Type": "Wait",
"Seconds": 45,
"Next": "WriteMetricLambda45"
},
"WriteMetricLambda45": {
"Type": "Task",
"Resource": "arn:aws:lambda:us-west-2:676655494xxx:function:WriteMetricFromStepFunction",
"End": true
}
}
},
{
"StartAt": "Wait50",
"States": {
"Wait50": {
"Type": "Wait",
"Seconds": 50,
"Next": "WriteMetricLambda50"
},
"WriteMetricLambda50": {
"Type": "Task",
"Resource": "arn:aws:lambda:us-west-2:676655494xxx:function:WriteMetricFromStepFunction",
"End": true
}
}
},
{
"StartAt": "Wait55",
"States": {
"Wait55": {
"Type": "Wait",
"Seconds": 55,
"Next": "WriteMetricLambda55"
},
"WriteMetricLambda55": {
"Type": "Task",
"Resource": "arn:aws:lambda:us-west-2:676655494xxx:function:WriteMetricFromStepFunction",
"End": true
}
}
}
],
"End": true
}
}
}
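If you'd rather create the state machine from code instead of the console, the following boto3 sketch is one way to do it. It assumes the JSON definition above is saved locally as state-machine.json (a hypothetical file name) with your own account ID substituted, and it references the IAM role created earlier.

import boto3

sfn = boto3.client('stepfunctions', region_name='us-west-2')

# Assumes the definition above is saved locally, with your account ID filled in.
with open('state-machine.json') as f:
    definition = f.read()

response = sfn.create_state_machine(
    name='WriteMetricFromStepFunction',
    definition=definition,
    roleArn='arn:aws:iam::676655494xxx:role/WriteMetricFromStepFunction'
)
print(response['stateMachineArn'])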
Now you’ve got a shiny new Step Functions state machine! However, you might ask yourself, “After the state machine has been created, how does it get executed?” Before I answer that question, create the Lambda function that writes the custom metric, and then you get the end-to-end process moving.
Create a Lambda function
The meaty part of the solution is a Lambda function, written for the Python 3.6 runtime, that retrieves metric values from ECS and then writes them to CloudWatch. This is the function that the Step Functions state machine triggers every five seconds, via its Task states. Key points to remember:
The Lambda function needs permission to:
- Write CloudWatch metrics (PutMetricData API).
- Retrieve metrics from ECS clusters (DescribeClusters API).
- Write StdOut to CloudWatch Logs.
Boto3, the AWS SDK for Python, is included in the Lambda execution environment for Python 2.x and 3.x.
Because Lambda includes the AWS SDK, you don’t have to worry about packaging it up and uploading it to Lambda. You can focus on writing code and automatically take a dependency on boto3.
As for permissions, you’ve already created the IAM role and attached a policy to it that enables your Lambda function to access the necessary API actions. When you create your Lambda function, make sure that you select the correct IAM role, to ensure it is invoked with the correct permissions.
The following Lambda function code is generic. So how does the Lambda function know which ECS cluster to gather metrics for? Your Step Functions state machine automatically passes in its state to the Lambda function. When you create your CloudWatch Events rule, you specify a simple JSON object that passes the desired ECS cluster name into your Step Functions state machine, which then passes it to the Lambda function.
Use the following property values as you create your Lambda function:
Function Name: WriteMetricFromStepFunction
Description: This Lambda function retrieves metric values from an ECS cluster and writes them to Amazon CloudWatch.
Runtime: Python 3.6
Memory: 128 MB
IAM Role: WriteMetricFromStepFunction
import boto3

def handler(event, context):
    cw = boto3.client('cloudwatch')
    ecs = boto3.client('ecs')
    print('Got boto3 client objects')

    # The cluster name is passed in from the Step Functions state machine input.
    Dimension = {
        'Name': 'ClusterName',
        'Value': event['ECSClusterName']
    }

    cluster = get_ecs_cluster(ecs, Dimension['Value'])

    # Write each cluster-level count as a high-resolution (1-second) custom metric.
    cw_args = {
        'Namespace': 'ECS',
        'MetricData': [
            {
                'MetricName': 'RunningTask',
                'Dimensions': [ Dimension ],
                'Value': cluster['runningTasksCount'],
                'Unit': 'Count',
                'StorageResolution': 1
            },
            {
                'MetricName': 'PendingTask',
                'Dimensions': [ Dimension ],
                'Value': cluster['pendingTasksCount'],
                'Unit': 'Count',
                'StorageResolution': 1
            },
            {
                'MetricName': 'ActiveServices',
                'Dimensions': [ Dimension ],
                'Value': cluster['activeServicesCount'],
                'Unit': 'Count',
                'StorageResolution': 1
            },
            {
                'MetricName': 'RegisteredContainerInstances',
                'Dimensions': [ Dimension ],
                'Value': cluster['registeredContainerInstancesCount'],
                'Unit': 'Count',
                'StorageResolution': 1
            }
        ]
    }
    cw.put_metric_data(**cw_args)
    print('Finished writing metric data')

def get_ecs_cluster(client, cluster_name):
    # Look up the cluster and return the first (and only) matching description.
    cluster = client.describe_clusters(clusters=[ cluster_name ])
    print('Retrieved cluster details from ECS')
    return cluster['clusters'][0]
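Before wiring the function into the state machine, you can smoke-test it locally by calling the handler directly with a sample event. This assumes you have AWS credentials and a default region configured locally, and that the cluster name matches your own; the event shape mirrors the JSON input that the CloudWatch Events rule passes in later.

# Local smoke test (assumes local AWS credentials and an existing ECS cluster).
if __name__ == '__main__':
    sample_event = {'ECSClusterName': 'ECSEsgaroth'}
    handler(sample_event, None)   # the context argument is unused by this handler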
Create the CloudWatch Events rule
Now you’ve created an IAM role and policy, Step Functions state machine, and Lambda function. How do these components actually start communicating with each other? The final step in this process is to set up a CloudWatch Events rule that triggers your metric-gathering Step Functions state machine every minute. You have two choices for your CloudWatch Events rule expression: rate or cron. In this example, use the cron expression.
A couple of key learning points from creating the CloudWatch Events rule:
- You can specify one or more targets, of different types (for example, a Lambda function, a Step Functions state machine, or an SNS topic).
- You're required to specify an IAM role with permissions to trigger your target. NOTE: This applies only to certain types of targets, including Step Functions state machines.
- Each target that supports IAM roles can be triggered using a different IAM role, in the same CloudWatch Events rule.
- Optional: You can provide custom JSON that is passed to your target Step Functions state machine as input.
Follow these steps to create the CloudWatch Events rule:
- Open the CloudWatch console.
- Choose Events, Rules, Create Rule.
- Select Schedule, Cron Expression, and then enter the following rule:
0/1 * * * ? *
- Choose Add Target, Step Functions State Machine, WriteMetricFromStepFunction.
- For Configure Input, select Constant (JSON Text).
- Enter the following JSON input, which is passed to Step Functions (change the cluster name to match your own):
{ "ECSClusterName": "ECSEsgaroth" }
- Choose Use Existing Role, WriteMetricFromStepFunction (the IAM role that you previously created).
After you've completed these steps, your CloudWatch Events rule is configured and begins triggering a new execution of the state machine every minute.
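As with the other components, you can create the same rule and target with boto3 instead of the console. The rule name below is a hypothetical choice, and the ARNs assume the state machine and IAM role created earlier, with your own account ID substituted.

import json
import boto3

events = boto3.client('events', region_name='us-west-2')

# Create (or update) a rule that fires every minute.
events.put_rule(
    Name='WriteMetricFromStepFunctionEveryMinute',
    ScheduleExpression='cron(0/1 * * * ? *)',
    State='ENABLED'
)

# Target the state machine, passing the ECS cluster name as constant JSON input.
events.put_targets(
    Rule='WriteMetricFromStepFunctionEveryMinute',
    Targets=[
        {
            'Id': 'WriteMetricFromStepFunction',
            'Arn': 'arn:aws:states:us-west-2:676655494xxx:stateMachine:WriteMetricFromStepFunction',
            'RoleArn': 'arn:aws:iam::676655494xxx:role/WriteMetricFromStepFunction',
            'Input': json.dumps({'ECSClusterName': 'ECSEsgaroth'})
        }
    ]
)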
Validate the solution
Now that you have finished implementing the solution to gather high-resolution metrics from ECS, validate that it’s working properly.
- Open the CloudWatch console.
- Choose Metrics.
- Choose Custom and select the ECS namespace.
- Choose the ClusterName metric dimension.
You should see the RunningTask, PendingTask, ActiveServices, and RegisteredContainerInstances metrics listed for your cluster.
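If you'd like to verify the data programmatically as well, here's a hedged boto3 sketch that reads back the RunningTask metric at one-second resolution. It assumes the cluster name used throughout this post; substitute your own.

from datetime import datetime, timedelta
import boto3

cw = boto3.client('cloudwatch', region_name='us-west-2')

# Read the last five minutes of RunningTask data at one-second resolution.
now = datetime.utcnow()
stats = cw.get_metric_statistics(
    Namespace='ECS',
    MetricName='RunningTask',
    Dimensions=[{'Name': 'ClusterName', 'Value': 'ECSEsgaroth'}],
    StartTime=now - timedelta(minutes=5),
    EndTime=now,
    Period=1,                      # high-resolution metrics support 1-second periods
    Statistics=['Average']
)
for point in sorted(stats['Datapoints'], key=lambda p: p['Timestamp']):
    print(point['Timestamp'], point['Average'])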
Troubleshoot configuration issues
If you aren’t receiving the expected ECS cluster metrics in CloudWatch, check for the following common configuration issues. Review the earlier procedures to make sure that the resources were properly configured.
- The IAM role's trust relationship is incorrectly configured. Make sure that the IAM role trusts Lambda, CloudWatch Events, and Step Functions in the correct region.
- The IAM role does not have the correct policies attached to it. Make sure that you have copied the IAM policy correctly as an inline policy on the IAM role.
- The CloudWatch Events rule is not triggering new Step Functions executions. Make sure that the target configuration on the rule has the correct Step Functions state machine and IAM role selected.
- The Step Functions state machine is being executed, but failing part way through. Examine the detailed error message on the failed state within the failed Step Functions execution. It's possible that the IAM role does not have permissions to trigger the target Lambda function, that the target Lambda function does not exist, or that the Lambda function failed to complete successfully due to invalid permissions.
Although the above list covers several potential configuration issues, it is not comprehensive. Make sure that you understand how the services connect to one another, how permissions are granted through IAM policies, and how IAM trust relationships work.
Conclusion
In this post, you implemented a serverless solution to gather and record high-resolution application metrics from containers running on Amazon ECS into CloudWatch. The solution consists of a Step Functions state machine, a Lambda function, a CloudWatch Events rule, and an IAM role and policy. The data that you gather from this solution helps you rapidly identify issues with an ECS cluster.
To gather high-resolution metrics from any other service, modify your Lambda function to gather the correct metrics from your target. If you prefer not to use Python, you can implement the Lambda function using one of the other supported runtimes, such as Node.js, Java, or .NET Core. Either way, this post should give you the fundamentals of capturing high-resolution metrics in CloudWatch.
If you found this post useful, or have questions, please comment below.