Containers
Fluent Bit for Amazon EKS on AWS Fargate is here
Akshay Ram, Prithvi Ramesh, Michael Hausenblas
In issue 701 of our containers roadmap we discussed supporting our CNCF Fluent Bit-based log router in the context of EKS on Fargate. In this blog post we provide context on this new feature and walk you through using it, shipping logs directly to CloudWatch with a few configuration steps.
Where previously you had to run a sidecar to route container logs from Amazon EKS pods running on AWS Fargate, you can now use a built-in log router. This means there are no sidecars to install or maintain. You simply select where you want to send your data and logs are routed to a destination of your choice.
We set out to build this feature keeping two design tenets in mind:
- Consistency where it matters by using native Kubernetes objects where applicable to give customers a consistent interface across compute types (EC2, managed node groups and Fargate).
- Simplicity where possible by managing infrastructure and add-ons on behalf of customers.
This led us to choose the Fluent Bit configuration language and the Kubernetes ConfigMap as the primary interface for configuring logging, since ConfigMaps are standard practice in Kubernetes clusters. We have simplified the lifecycle management of Fluent Bit by including it in the platform. Tell us where logs should go and let AWS manage the rest.
Under the hood, EKS on Fargate uses a version of Fluent Bit for AWS, an upstream-conformant distribution of Fluent Bit managed by AWS.
In order to use Fluent Bit-based logging in EKS on Fargate you apply a ConfigMap to your Amazon EKS cluster using Fluent Bit’s configuration as a data value, defining where container logs will be shipped to. This logging ConfigMap must be created in a fixed namespace called aws-observability and has a cluster-wide effect, meaning that you can send application-level logs from any application in any namespace. To define the log destination of your choice you use Fluent Bit’s configuration language. You can choose between CloudWatch, Elasticsearch, Kinesis Data Firehose, and Kinesis Data Streams as outputs. Further, you can send logs to partner destinations like Datadog, Splunk, and more via Firehose or CloudWatch, and we’re working on supporting partner output plugins directly as well.
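For example, if you wanted to ship to Kinesis Data Firehose instead of CloudWatch, the [OUTPUT] section might look like the following. This is a minimal sketch using the C-based kinesis_firehose output plugin; the delivery stream name my-delivery-stream is a placeholder for a stream that would already have to exist in your account:

[OUTPUT]
    Name kinesis_firehose
    Match *
    region eu-west-1
    delivery_stream my-delivery-stream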
Sending logs to CloudWatch
In the following we show you how to use cloudwatch_logs (a Fluent Bit output plugin written in C) to send logs from a workload running in an EKS on Fargate cluster to CloudWatch.
First, create an EKS on Fargate cluster. Store the following eksctl configuration in a file called eks-cluster-config.yaml:
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: fluentbit
  region: eu-west-1
  version: '1.18'
iam:
  withOIDC: true
fargateProfiles:
  - name: defaultfp
    selectors:
      - namespace: demo
      - namespace: kube-system
cloudWatch:
  clusterLogging:
    enableTypes: ["*"]
And create the EKS cluster using:
eksctl create cluster -f eks-cluster-config.yaml
Note that creating the EKS control plane and the Fargate profile can take some 15 to 20 minutes to complete.
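If you want to confirm that the Fargate profile is in place before proceeding, you can list it (assuming the cluster name and region from the configuration above):

eksctl get fargateprofile --cluster fluentbit --region eu-west-1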
Next, create the dedicated aws-observability namespace and the ConfigMap for Fluent Bit by saving the following content to a file called fluentbit-config.yaml:
kind: Namespace
apiVersion: v1
metadata:
  name: aws-observability
  labels:
    aws-observability: enabled
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: aws-logging
  namespace: aws-observability
data:
  output.conf: |
    [OUTPUT]
        Name cloudwatch_logs
        Match *
        region eu-west-1
        log_group_name fluent-bit-cloudwatch
        log_stream_prefix from-fluent-bit-
        auto_create_group On
In the above YAML manifest, note that fluent-bit-cloudwatch is the name of the CloudWatch log group that is automatically created as soon as your apps start logging.
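Once your apps have started logging, you can confirm the log group’s existence from the command line as well; a quick check, again assuming the eu-west-1 region from our cluster configuration:

aws logs describe-log-groups \
  --log-group-name-prefix fluent-bit-cloudwatch \
  --region eu-west-1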
Now it’s time to create the Fluent Bit configuration using the following command:
kubectl apply -f fluentbit-config.yaml
We want to verify that the Fluent Bit ConfigMap is in place, so execute the following command; you should see output similar to this:
$ kubectl -n aws-observability get cm
NAME          DATA   AGE
aws-logging   1      3h25m
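If you want to inspect the Fluent Bit configuration the cluster will use, you can also dump the ConfigMap’s contents:

kubectl -n aws-observability get cm aws-logging -o yaml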
With Fluent Bit set up, we next need to give it permission to write to CloudWatch. We do that by first downloading the policy locally:
curl -o permissions.json \
https://raw.githubusercontent.com/aws-samples/amazon-eks-fluent-logging-examples/mainline/examples/fargate/cloudwatchlogs/permissions.json
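For reference, this policy grants the CloudWatch Logs actions the cloudwatch_logs plugin needs; at the time of writing it looks roughly like the following (treat the downloaded permissions.json as the authoritative version):

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "logs:CreateLogStream",
        "logs:CreateLogGroup",
        "logs:DescribeLogStreams",
        "logs:PutLogEvents"
      ],
      "Resource": "*"
    }
  ]
}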
Next, we create the policy and attach it to the pod execution role of your EKS on Fargate cluster:
aws iam create-policy \
--policy-name FluentBitEKSFargate \
--policy-document file://permissions.json
aws iam attach-role-policy \
--policy-arn arn:aws:iam::123456789012:policy/FluentBitEKSFargate \
--role-name eksctl-fluentbit-cluster-FargatePodExecutionRole-XXXXXXXXXX
In the above command you have to replace eksctl-fluentbit-cluster-FargatePodExecutionRole-XXXXXXXXXX with your own pod execution role name (you can find it, for example, via the EKS console → Fargate profile). The same applies for the account ID (123456789012) in arn:aws:iam::123456789012:policy/FluentBitEKSFargate, which you have to replace with your own account ID.
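If you created the cluster with eksctl as shown above, you can also look up the pod execution role ARN programmatically instead of via the console; a sketch assuming the cluster and profile names from our configuration:

aws eks describe-fargate-profile \
  --cluster-name fluentbit \
  --fargate-profile-name defaultfp \
  --query 'fargateProfile.podExecutionRoleArn' \
  --output text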
With the logging infrastructure set up, we can now generate logs that get shipped to CloudWatch.
First we create a service that generates logs based on HTTP interactions. Create a file called logger-server.yaml and enter the following content:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: logger-server
spec:
  selector:
    matchLabels:
      app: nginx
  replicas: 1
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: main
          image: nginx:1.14.2
          ports:
            - containerPort: 80
Next, create the deployment, its pod and a corresponding service that routes traffic to it:
kubectl -n demo apply -f logger-server.yaml && kubectl -n demo expose deploy logger-server
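If you want to double-check that the service was created before we port-forward to it later, you can list it:

kubectl -n demo get svc logger-server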
Let’s verify that the pod the logger-server deployment manages in fact has logging enabled, as we would expect:
# find the name of the logger-server pod:
$ kubectl -n demo get po -l app=nginx
NAME                            READY   STATUS    RESTARTS   AGE
logger-server-fb8546f69-qflh2   1/1     Running   0          21h

# get details about the logger-server pod:
$ kubectl -n demo describe po/logger-server-fb8546f69-qflh2
Name:                 logger-server-fb8546f69-qflh2
Namespace:            demo
Priority:             2000001000
Priority Class Name:  system-node-critical
Node:                 fargate-ip-192-168-147-211.eu-west-1.compute.internal/192.168.147.211
Start Time:           Wed, 25 Nov 2020 16:01:51 +0000
Labels:               app=nginx
                      eks.amazonaws.com/fargate-profile=defaultfp
                      pod-template-hash=fb8546f69
Annotations:          CapacityProvisioned: 0.25vCPU 0.5GB
                      Logging: LoggingEnabled
...
Events:
  Type    Reason          Age   From               Message
  ----    ------          ----  ----               -------
  Normal  LoggingEnabled  2m7s  fargate-scheduler  Successfully enabled logging for pod
  Normal  Scheduled       44s   fargate-scheduler  Successfully assigned demo/logger-server-fb8546f69-qflh2 to fargate-ip-192-168-147-211.eu-west-1.compute.internal
  Normal  Pulling         45s   kubelet            Pulling image "nginx:1.14.2"
  Normal  Pulled          40s   kubelet            Successfully pulled image "nginx:1.14.2"
  Normal  Created         39s   kubelet            Created container main
  Normal  Started         39s   kubelet            Started container main
That looks great. We can see the confirmation in the events that says Successfully enabled logging for pod, which means we should be good to move on to the next and final step: generating logs.
To cause the logger-server to generate logs we forward the service to our local environment and use curl as the client to issue GET requests; in addition we watch the logs locally. Let’s do that, using three terminals:
# [terminal 1] forward the logger-server traffic locally:
$ kubectl -n demo port-forward svc/logger-server 8080:80
Forwarding from 127.0.0.1:8080 -> 80
Forwarding from [::1]:8080 -> 80
Handling connection for 8080
# [terminal 2] watch logs locally:
$ kubectl -n demo logs deploy/logger-server -f
127.0.0.1 - - [25/Nov/2020:16:03:41 +0000] "GET / HTTP/1.1" 200 612 "-" "curl/7.64.1" "-"
127.0.0.1 - - [25/Nov/2020:16:03:42 +0000] "GET / HTTP/1.1" 200 612 "-" "curl/7.64.1" "-"
127.0.0.1 - - [25/Nov/2020:16:03:43 +0000] "GET / HTTP/1.1" 200 612 "-" "curl/7.64.1" "-"
# [terminal 3] generate HTTP traffic:
$ curl localhost:8080
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
...
The log group name we specified in the Fluent Bit ConfigMap is fluent-bit-cloudwatch, so let’s check for it in the CloudWatch console.
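If you prefer the command line over the console, you can also tail the log group directly; this assumes AWS CLI v2, which provides the aws logs tail subcommand:

aws logs tail fluent-bit-cloudwatch --follow --region eu-west-1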
Hurray, all worked as expected! Before we wrap up, here are some more tips for successfully using Fluent Bit on EKS on Fargate.
Usage considerations
We suggest you plan for some additional resources to be used by the log router. Our tests show that you should allow up to 50 MB of memory as headroom; if you expect your application to generate logs at very high throughput, allow up to 100 MB. To account for this, add the headroom to your application pod via resource requests, as described in the Kubernetes docs, since the Fluent Bit process runs alongside it.
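For example, if your application container itself needs around 100 MB of memory, its pod spec might request something like the following (illustrative values only, not a sizing recommendation for your workload):

resources:
  requests:
    memory: "150Mi"  # ~100Mi for the app plus ~50Mi headroom for the log router
    cpu: "250m"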
Depending on your plug-in (log destination) of choice, you pay for log ingestion and storage separately; for example, refer to the CloudWatch pricing page for details.
At launch we support the following Kubernetes and EKS platform versions:
- Kubernetes 1.15: platform version eks.6
- Kubernetes 1.16: platform version eks.5
- Kubernetes 1.17: platform version eks.5
- Kubernetes 1.18: platform version eks.3
Let us know about your experience with this new integrated logging feature and share suggestions via our containers roadmap, and watch out for re:Invent 2020 sessions for more details and announcements.