AWS Compute Blog
Introducing the AWS Lambda Telemetry API
This blog post is written by Anton Aleksandrov, Principal Solutions Architect, and Shridhar Pandey, Senior Product Manager
Today AWS is announcing the AWS Lambda Telemetry API. This provides an easier way to receive enhanced function telemetry directly from the Lambda service and send it to custom destinations. This makes it easier for developers and operators using third-party observability extensions to monitor and observe their Lambda functions.
Previously, observability tools, running as Lambda extensions, could use the Lambda Logs API to collect logs generated by the function code and the Lambda service. However, they lacked the ability to collect additional telemetry, such as traces and performance metrics, generated by the Lambda service during function initialization and invocation. This meant customers using third-party tools had to rely on workarounds or bespoke libraries to gather these additional insights.
The Lambda Telemetry API is an enhanced version of the Logs API, which enables extensions to receive platform events, traces, and metrics directly from Lambda in addition to logs. This enables tooling vendors to collect enriched telemetry from their extensions and send it to any destination.
Today you can use Telemetry API-enabled extensions to send telemetry data to Coralogix, Datadog, Dynatrace, Lumigo, New Relic, Sedai, Site24x7, Serverless.com, Sumo Logic, Sysdig, Thundra, or your own custom destinations.
Overview
To receive telemetry, extensions subscribe using the new Lambda Telemetry API.
The Lambda service then streams the telemetry events directly to the extension. The events include platform events, trace spans, function and extension logs, and additional Lambda platform metrics. The extension can then process, filter, and route them to any preferred destination.
You can add an extension from the tooling provider of your choice to your Lambda function. You can deploy extensions, including ones that use the Telemetry API, as Lambda layers, with the AWS Management Console and AWS Command Line Interface (AWS CLI). You can also use infrastructure as code tools such as AWS CloudFormation, the AWS Serverless Application Model (AWS SAM), Serverless Framework, and Terraform.
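As an illustration, the following sketch attaches an extension layer to an existing function using the AWS SDK for Python (boto3). The function name and layer ARN are placeholders; also note that this call replaces the function's entire layer list, so include any layers the function already uses.

import boto3

lambda_client = boto3.client("lambda")

# Placeholder values: replace with your function name and the extension layer ARN
# published by your tooling provider (or your own layer).
lambda_client.update_function_configuration(
    FunctionName="my-function",
    # This call replaces the full layer list, so include any existing layers as well.
    Layers=["arn:aws:lambda:us-east-1:123456789012:layer:my-telemetry-extension:1"],
)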
Lambda Extensions from the AWS Partner Network (APN) available at launch
Today, you can use Lambda extensions that use Telemetry API from the following Lambda partners:
- The Coralogix AWS Lambda Telemetry Exporter extension now offers improved monitoring and alerting for Lambda functions by further streamlining collection and correlation of logs, metrics, and traces.
- The Datadog extension further simplifies how you visualize the impact of cold starts, and monitor and alert on latency, duration, and payload size of your Lambda functions by collecting logs, traces, and real-time metrics from your function.
- Dynatrace now provides enhanced observability for AWS Lambda functions with simplified instrumentation which is easy to scale.
- The Lumigo lambda-log-shipper extension simplifies aggregating and forwarding Lambda logs to third-party tools. It now also makes it easy for you to detect Lambda function timeouts.
- The New Relic extension now provides a unified observability view for your Lambda functions with insights that help you better understand and optimize the performance of your functions.
- Sedai now uses the Telemetry API to help you improve the performance and availability of your Lambda functions by gathering insights about your function and providing recommendations for manual and autonomous remediation.
- The Site24x7 extension now offers new metrics, which enable you to get deeper insights into the different phases of the Lambda function lifecycle, such as initialization and invocation.
- Serverless.com now uses the Telemetry API to provide real-time performance details for your Lambda function through the Dev Mode feature of their new Serverless Console V.2 offering, which simplifies debugging in the AWS Cloud.
- Sumo Logic now makes it easier and faster for you to get your mission-critical Lambda function telemetry sent directly to Sumo Logic, so you can quickly analyze and remediate errors and exceptions.
- The Sysdig Monitor extension generates and collects real-time metrics directly from the Lambda platform, enabling you to achieve reduced MTTR (mean time to resolution) while monitoring your serverless applications.
- The Thundra extension enables you to export logs, metrics, and events for Lambda execution environment lifecycle events emitted by the Telemetry API to a destination of your choice such as an S3 bucket, a database, or a monitoring backend.
Seeing example Telemetry API extensions in action
This demo shows an example of using a telemetry extension to receive telemetry, batch it, and send it to a desired destination.
To set up the example, visit the GitHub repo for the extension implemented in the language of your choice and follow the instructions in the README.md file.
To configure the batching behavior, which controls when the extension sends the data, set the Lambda environment variable DISPATCH_MIN_BATCH_SIZE. When the number of buffered events reaches this threshold, the extension POSTs the batch of telemetry events to the destination specified in the DISPATCH_POST_URI environment variable.
You can configure an example DISPATCH_POST_URI for the extension to deliver the telemetry data using https://webhook.site/.
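To illustrate the idea, here is a simplified Python sketch of such dispatch logic (not the sample extension's actual code): it buffers incoming telemetry events and POSTs them as a batch once DISPATCH_MIN_BATCH_SIZE is reached, or unconditionally when the extension shuts down.

import json
import os
import urllib.request

MIN_BATCH_SIZE = int(os.environ.get("DISPATCH_MIN_BATCH_SIZE", "10"))
POST_URI = os.environ.get("DISPATCH_POST_URI")  # for example, a https://webhook.site/ URL

_queue = []

def enqueue(telemetry_events):
    """Called by the extension's telemetry listener with newly received events."""
    _queue.extend(telemetry_events)

def dispatch(force=False):
    """POST the buffered events once the threshold is met (force=True at SHUTDOWN)."""
    global _queue
    if not POST_URI or not _queue:
        return
    if force or len(_queue) >= MIN_BATCH_SIZE:
        batch, _queue = _queue, []
        req = urllib.request.Request(
            POST_URI,
            data=json.dumps(batch).encode("utf-8"),
            headers={"Content-Type": "application/json"},
        )
        urllib.request.urlopen(req)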
Telemetry events for one invoke may be received and processed during the next invocation. Events for the last invoke may be processed during the SHUTDOWN event.
Test and invoke the function from the Lambda console or the AWS CLI. You can see that the webhook receives the telemetry data.
You can also view the function and extension logs in CloudWatch Logs. The example extension includes verbose logging to understand the extension lifecycle.
Sample Telemetry API events
When the extension receives telemetry data, each event is a JSON object containing additional information, such as related metrics or trace spans. The following example shows the function initialization events. You can see that the function initializes with on-demand concurrency, the initialization succeeds, and the initialization duration is 123 milliseconds.
{
    "time": "2022-08-02T12:01:23.521Z",
    "type": "platform.initStart",
    "record": {
        "initializationType": "on-demand",
        "phase": "init"
    }
}
{
    "time": "2022-08-02T12:01:23.521Z",
    "type": "platform.initRuntimeDone",
    "record": {
        "initializationType": "on-demand",
        "status": "success"
    }
}
{
    "time": "2022-08-02T12:01:23.521Z",
    "type": "platform.initReport",
    "record": {
        "initializationType": "on-demand",
        "phase": "init",
        "metrics": {
            "durationMs": 123.0
        }
    }
}
Function invocation events include the associated requestId and tracing information that connects the invocation to the X-Ray tracing context. They also include platform spans showing response latency and response duration, as well as invocation metrics such as duration in milliseconds.
{
    "time": "2022-08-02T12:01:23.521Z",
    "type": "platform.start",
    "record": {
        "requestId": "e6b761a9-c52d-415d-b040-7ba94b9452f3",
        "version": "$LATEST",
        "tracing": {
            "spanId": "54565fb41ac79632",
            "type": "X-Amzn-Trace-Id",
            "value": "Root=1-62e900b2-710d76f009d6e7785905449a;Parent=0efbd19962d95b05;Sampled=1"
        }
    }
}
{
    "time": "2022-08-02T12:01:23.521Z",
    "type": "platform.runtimeDone",
    "record": {
        "requestId": "e6b761a9-c52d-415d-b040-7ba94b9452f3",
        "status": "success",
        "tracing": {
            "spanId": "54565fb41ac79632",
            "type": "X-Amzn-Trace-Id",
            "value": "Root=1-62e900b2-710d76f009d6e7785905449a;Parent=0efbd19962d95b05;Sampled=1"
        },
        "spans": [
            {
                "name": "responseLatency",
                "start": "2022-08-02T12:01:23.521Z",
                "durationMs": 23.02
            },
            {
                "name": "responseDuration",
                "start": "2022-08-02T12:01:23.521Z",
                "durationMs": 20
            }
        ],
        "metrics": {
            "durationMs": 200.0,
            "producedBytes": 15
        }
    }
}
{
    "time": "2022-08-02T12:01:23.521Z",
    "type": "platform.report",
    "record": {
        "requestId": "e6b761a9-c52d-415d-b040-7ba94b9452f3",
        "metrics": {
            "durationMs": 220.0,
            "billedDurationMs": 300,
            "memorySizeMB": 128,
            "maxMemoryUsedMB": 90,
            "initDurationMs": 200.0
        },
        "tracing": {
            "spanId": "54565fb41ac79632",
            "type": "X-Amzn-Trace-Id",
            "value": "Root=1-62e900b2-710d76f009d6e7785905449a;Parent=0efbd19962d95b05;Sampled=1"
        }
    }
}
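As a small illustration of how an extension might consume these events, the following hypothetical helper pulls the per-invoke metrics out of platform.report events with the shape shown above:

def report_metrics(telemetry_events):
    """Yield (requestId, billedDurationMs, maxMemoryUsedMB) for each platform.report event."""
    for event in telemetry_events:
        if event.get("type") == "platform.report":
            record = event["record"]
            metrics = record.get("metrics", {})
            yield (
                record.get("requestId"),
                metrics.get("billedDurationMs"),
                metrics.get("maxMemoryUsedMB"),
            )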
Building a Telemetry API extension
Lambda extensions run as independent processes in the execution environment and continue to run after the function invocation is fully processed. Because extensions run as separate processes, you can write them in a language different from the function code. We recommend implementing extensions using a compiled language as a self-contained binary. This makes the extension compatible with all the supported runtimes.
Extensions that use the Telemetry API have the following lifecycle.
- The extension registers itself using the Lambda Extension API and subscribes to receive INVOKE and SHUTDOWN events. With the Telemetry API, the registration response body contains additional information, such as the function name, function version, and account ID.
- The extension starts a telemetry listener. This is a local HTTP or TCP endpoint. We recommend using HTTP rather than TCP.
- The extension uses the Telemetry API to subscribe to the desired telemetry event streams.
- The Lambda service POSTs telemetry stream data to your telemetry listener. We recommend batching the telemetry data as it arrives at the listener. You can perform any custom processing on this data and send it on to an S3 bucket, another custom destination, or an external observability service.
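The following is a minimal Python sketch of this lifecycle, based on the Extensions API and Telemetry API endpoints described in the documentation. It is illustrative rather than production-ready: the extension name, listener port, and buffering values are assumptions, and a real external extension would typically ship as a self-contained binary, as noted above.

import http.server
import json
import os
import threading
import urllib.request

RUNTIME_API = os.environ["AWS_LAMBDA_RUNTIME_API"]
LISTENER_PORT = 4243  # assumption: any free port inside the execution environment

received_events = []  # telemetry events buffered by the listener

class TelemetryListener(http.server.BaseHTTPRequestHandler):
    def do_POST(self):
        # Lambda POSTs an array of telemetry events to the subscribed destination.
        length = int(self.headers.get("Content-Length", 0))
        received_events.extend(json.loads(self.rfile.read(length)))
        self.send_response(200)
        self.end_headers()

def register():
    # Register with the Extensions API for INVOKE and SHUTDOWN events.
    req = urllib.request.Request(
        f"http://{RUNTIME_API}/2020-01-01/extension/register",
        data=json.dumps({"events": ["INVOKE", "SHUTDOWN"]}).encode(),
        headers={"Lambda-Extension-Name": "telemetry_extension"},  # must match the extension file name
    )
    with urllib.request.urlopen(req) as res:
        return res.headers["Lambda-Extension-Identifier"]

def subscribe(extension_id):
    # Subscribe to platform, function, and extension telemetry streams over HTTP.
    subscription = {
        "schemaVersion": "2022-07-01",
        "types": ["platform", "function", "extension"],
        "buffering": {"maxItems": 1000, "maxBytes": 262144, "timeoutMs": 100},
        "destination": {"protocol": "HTTP", "URI": f"http://sandbox.localdomain:{LISTENER_PORT}"},
    }
    req = urllib.request.Request(
        f"http://{RUNTIME_API}/2022-07-01/telemetry",
        data=json.dumps(subscription).encode(),
        method="PUT",
        headers={"Lambda-Extension-Identifier": extension_id, "Content-Type": "application/json"},
    )
    urllib.request.urlopen(req).read()

def next_event(extension_id):
    # Blocks until the next INVOKE or SHUTDOWN event for this extension.
    req = urllib.request.Request(
        f"http://{RUNTIME_API}/2020-01-01/extension/event/next",
        headers={"Lambda-Extension-Identifier": extension_id},
    )
    return json.load(urllib.request.urlopen(req))

server = http.server.HTTPServer(("0.0.0.0", LISTENER_PORT), TelemetryListener)
threading.Thread(target=server.serve_forever, daemon=True).start()

extension_id = register()
subscribe(extension_id)

while True:
    event = next_event(extension_id)
    # Process and flush `received_events` here (filter, batch, forward, ...).
    if event.get("eventType") == "SHUTDOWN":
        break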
See the Telemetry API documentation and sample extensions for additional details.
The Lambda Telemetry API supersedes the Lambda Logs API. While the Logs API remains fully functional, AWS recommends using the Telemetry API. New functionality is only available with the Telemetry API. Extensions can subscribe to either the Logs API or the Telemetry API, but not both. After subscribing to one of them, any attempt to subscribe to the other returns an error.
Mapping Telemetry API schema to OpenTelemetry spans
The Lambda Telemetry API schema is semantically compatible with OpenTelemetry (OTEL). You can use events received from the Telemetry API to build and report OTEL spans. Three Telemetry API lifecycle events represent a single function invocation: platform.start, platform.runtimeDone, and platform.report. You should represent these as a single OTEL span. You can add additional details to your spans using information available in platform.runtimeDone events under the spans property.
Mapping of Telemetry API events to OTEL spans is described in the Telemetry API documentation.
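As a rough illustration (not the official mapping), the sketch below combines one invocation's platform.start, platform.runtimeDone, and platform.report events into a single OTEL-style span dictionary, deriving the trace and parent IDs from the X-Amzn-Trace-Id value shown in the samples above. The output shape is simplified for readability.

from datetime import datetime, timedelta

def parse_xray_header(value):
    """Split 'Root=...;Parent=...;Sampled=...' into a dict."""
    return dict(part.split("=", 1) for part in value.split(";"))

def to_otel_span(start_event, runtime_done_event, report_event):
    start = start_event["record"]
    done = runtime_done_event["record"]
    report = report_event["record"]
    trace = parse_xray_header(start["tracing"]["value"])
    start_time = datetime.fromisoformat(start_event["time"].replace("Z", "+00:00"))
    duration_ms = report["metrics"]["durationMs"]
    return {
        "name": "lambda-invocation",
        "traceId": trace["Root"],
        "parentSpanId": trace.get("Parent"),
        "spanId": start["tracing"]["spanId"],
        "startTime": start_time.isoformat(),
        "endTime": (start_time + timedelta(milliseconds=duration_ms)).isoformat(),
        "status": done["status"],
        # Attribute name follows OTEL FaaS semantic conventions (illustrative).
        "attributes": {"faas.invocation_id": start["requestId"]},
    }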
Metrics and pricing
The Telemetry API introduces new per-invoke metrics to help you understand the impact of extensions on your function’s performance. The metrics are available within the platform.runtimeDone event:
- durationMs measures the time taken by the Lambda Runtime to run your function handler code.
- producedBytes measures the number of bytes returned during the invoke phase.
There are also two new trace spans available within the platform.runtimeDone event:
- responseLatency measures the time taken by the Runtime to send a response.
- responseDuration measures the time taken by the Runtime to finish sending the response from when it starts streaming it.
Extensions using the Telemetry API, like other extensions, share the same billing model as Lambda functions. When using Lambda functions with extensions, you pay for requests served and the combined compute time used to run your code and all extensions, in 1-ms increments. To learn more about the billing for extensions, visit the Lambda pricing page.
Useful links
- AWS Lambda Extension API docs
- AWS Lambda Telemetry API docs
- Introducing Lambda Extensions blog
- AWS Lambda Extensions GA blog
- Lambda Extensions Deep-Dive video series on youtube.com
- AWS Lambda Extensions github.com samples
- AWS Lambda Telemetry API Extensions samples (Node.js, Python, Golang)
Conclusion
The Lambda Telemetry API enhances the functionality of the Logs API, enabling Lambda extensions to receive logs, platform traces, and invocation-level metrics directly from Lambda. This means developers and operators can now collect enhanced telemetry from Lambda using third-party monitoring and observability tools they are already familiar with.
To see how the Telemetry API works, try the demos in the GitHub repository.
Build your own extensions using the Telemetry API today, or use extensions provided by the Lambda observability partners.
For more serverless learning resources, visit Serverless Land.