AWS Storage Blog
Persistent storage for container logging using Fluent Bit and Amazon EFS
UPDATE 9/8/2021: Amazon Elasticsearch Service has been renamed to Amazon OpenSearch Service. See details.
Logging is a powerful debugging mechanism for developers and operations teams when they must troubleshoot issues. Containerized applications write logs to standard output, which is redirected to local ephemeral storage by default. These logs are lost when the container is terminated and are not available to troubleshoot issues unless they are stored on persistent storage or routed successfully to another centralized logging destination.
In this blog, I cover how to persist logs from your Amazon Elastic Kubernetes Service (Amazon EKS) containers on highly durable, highly available, elastic, and POSIX-compliant Amazon Elastic File System (Amazon EFS) file systems. We explore two different use cases:
- Use case 1: Persist your application logs directly on an Amazon EFS file system when default standard output (stdout) cannot be used. This use case applies to:
- Applications running on AWS Fargate for Amazon EKS. Fargate requires applications to write logs to a file system instead of stdout.
- Traditional applications that are containerized and need the ability to write application logs to a file.
- Use case 2: Persist your container logs centrally on an Amazon EFS file system using the Fluent Bit file plugin. When you route your container logs using Fluent Bit to external sources like Elasticsearch for centralized logging, there is a risk of losing logs when these external sources are under heavy load or must be restarted. Storing these logs on EFS gives developers and operations teams peace of mind, as they know a copy of their logs is available on EFS.
Using the Amazon EFS Container Storage Interface (CSI) driver, now generally available, customers can persist data and state from their containers running in Amazon EKS. Amazon EFS provides fully managed, elastic, highly available, scalable, high-performance, cloud-native shared file systems, so highly available applications can access the same shared data across all Availability Zones in the Region. If a Kubernetes pod is shut down and relaunched, the CSI driver reconnects the EFS file system, even if the pod is relaunched in a different Availability Zone.
Amazon EKS is a fully managed service that makes it easy to run Kubernetes without needing to install and operate your own Kubernetes control plane or worker nodes. EKS runs Kubernetes control plane instances across multiple Availability Zones to ensure high availability, automatically detects and replaces unhealthy control plane instances, and provides automated version upgrades and patching for them. In addition, with the recently launched support for Amazon EFS file systems on AWS Fargate, EKS pods running on AWS Fargate can now mount EFS file systems using the EFS CSI driver.
Fluent Bit is an open source log shipper and processor that collects data from multiple sources and forwards it to different destinations for further analysis. It is a lightweight, performant part of the Fluentd ecosystem that uses far fewer resources and leaves a tiny footprint on your system's memory. You can route logs to Amazon CloudWatch, Amazon Elasticsearch Service, Amazon Redshift, and a wide range of other destinations supported by Fluent Bit.
Prerequisites:
Before we dive into our use cases, let’s review the prerequisites. These steps should be completed:
- Installed the aws-iam-authenticator
- Installed the Kubernetes command line utility kubectl version 1.14 or later
- Installed eksctl (a simple command line utility for creating and managing Kubernetes clusters on Amazon EKS)
- Basic understanding of Kubernetes and Amazon EFS
- Basic understanding of log shipping and forwarding using Fluent Bit
- Version 1.18.17 or later of the AWS CLI installed (to install or upgrade the AWS CLI, see this documentation on installing the AWS CLI)
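To quickly confirm these tools are installed and on your PATH, you can run a few version checks (a minimal sketch; the exact output format varies by tool version):
$ aws --version
$ kubectl version --client
$ eksctl version
$ aws-iam-authenticator version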
If you are new to Fluent Bit, I recommend reading my colleagues' introductory blog posts on Fluent Bit.
Use case 1: Persist your application logs directly on an Amazon EFS file system when default stdout cannot be used
As mentioned earlier, containers are ephemeral and logs written to local storage are lost when the container shuts down. By using Amazon EFS, you can persist your application logs from your AWS Fargate or Amazon EKS containers. You can then use Fluent Bit to collect those logs and forward them to your own log management server or external sources like Amazon CloudWatch, Elasticsearch, etc.
Configure the required infrastructure
Once you have the preceding prerequisites ready, you can start deploying an Amazon EKS cluster and creating a new Amazon EFS file system.
1. Deploy an Amazon EKS cluster
Deploy an Amazon EKS cluster using the following command. This command creates an Amazon EKS cluster in the us-west-2 Region with one node group containing a single c5.large node.
$ eksctl create cluster --name fluent-bit-efs-demo --region us-west-2 --zones us-west-2a,us-west-2b --nodegroup-name fluent-bit-efs-demo-workers --node-type c5.large --nodes 1 --nodes-min 1
It takes approximately 15–20 minutes to provision the new cluster. When the cluster is ready, you can check the status by running:
$ kubectl get svc
The following is the output from the preceding command:
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.100.0.1 <none> 443/TCP 8m36s
2. Create an Amazon EFS file system
2.1. First, get the VPC ID for the Amazon EKS cluster we created in step 1.
$ CLUSTER_NAME="fluent-bit-efs-demo"
$ VPCID=$(aws ec2 describe-vpcs --filters "Name=tag:Name,Values=eksctl-${CLUSTER_NAME}-cluster/VPC" --query "Vpcs[0].VpcId" --output text)
2.2. Identify the subnet IDs for your Amazon EKS node group:
$ aws ec2 describe-subnets --filters "Name=tag:Name,Values=eksctl-${CLUSTER_NAME}-cluster/SubnetPrivateUSWEST2*" --query "Subnets[*].SubnetId" --output text
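Optionally, you can capture these subnet IDs in a shell variable so they can be reused when creating the mount targets in step 2.6. SUBNET_IDS is a variable name I am introducing here for convenience:
$ SUBNET_IDS=$(aws ec2 describe-subnets --filters "Name=tag:Name,Values=eksctl-${CLUSTER_NAME}-cluster/SubnetPrivateUSWEST2*" --query "Subnets[*].SubnetId" --output text)
$ echo ${SUBNET_IDS}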
2.3. Create a security group for your Amazon EFS mount target:
$ SECURITY_GROUP_CIDR=$(aws ec2 describe-vpcs --filters "Name=tag:Name,Values=eksctl-${CLUSTER_NAME}-cluster/VPC" --query "Vpcs[0].CidrBlock" --output text)
$ SECURITY_GROUP_ID=$(aws ec2 create-security-group --group-name fluent-bit-efs-security-group --vpc-id ${VPCID} --description "EFS Security Group for fluent bit demo" --query "GroupId" --output text)
2.4. Authorize inbound access to the security group for the Amazon EFS mount target, allowing NFS traffic (port 2049) from the VPC CIDR block:
$ aws ec2 authorize-security-group-ingress --group-id ${SECURITY_GROUP_ID} --protocol tcp --port 2049 --cidr ${SECURITY_GROUP_CIDR}
2.5. Create an Amazon EFS file system by running the following command:
$ aws efs create-file-system --creation-token creation-token --performance-mode generalPurpose --throughput-mode bursting --region us-west-2 --tags Key=Name,Value=EKS-Persistent-storage-FluentBit
2.6. Create Amazon EFS mount targets using the subnet IDs identified in step 2.2.
$ aws efs create-mount-target --file-system-id fs-56ee5a53 --subnet-id subnet-0d2f2cf8ccef8af2d --security-groups $SECURITY_GROUP_ID --region us-west-2
Repeat step 2.6 for the second subnet ID.
$ aws efs create-mount-target --file-system-id fs-56ee5a53 --subnet-id subnet-0b4896ce697890747 --security-groups $SECURITY_GROUP_ID --region us-west-2
2.7. Create an Amazon EFS access point. Amazon EFS access points are application-specific entry points into an EFS file system that make it easier to manage application access to shared datasets. Access points enable you to enforce a user identity based on the POSIX UID/GID specified. Create an EFS access point and enforce a UID/GID of 1001:1001 using the following command:
$ aws efs create-access-point --file-system-id fs-56ee5a53 --posix-user Uid=1001,Gid=1001 --root-directory "Path=/fluentbit,CreationInfo={OwnerUid=1001,OwnerGid=1001,Permissions=755}"
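Optionally, you can capture the file system ID and access point ID in shell variables, which makes it easier to build the volumeHandle value used in step 4.2. The FILE_SYSTEM_ID and ACCESS_POINT_ID variable names are introduced here for illustration, and the query assumes the Name tag from step 2.5:
$ FILE_SYSTEM_ID=$(aws efs describe-file-systems --query "FileSystems[?Name=='EKS-Persistent-storage-FluentBit'].FileSystemId" --output text)
$ ACCESS_POINT_ID=$(aws efs describe-access-points --file-system-id ${FILE_SYSTEM_ID} --query "AccessPoints[0].AccessPointId" --output text)
$ echo ${FILE_SYSTEM_ID} ${ACCESS_POINT_ID}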
3. Deploy the Amazon EFS CSI driver to your Amazon EKS cluster
3.1. The CSI driver currently supports static provisioning for Amazon EFS. This means that an EFS file system must be created manually first, as outlined in step 2 earlier. This step is not required when using Fargate, as the CSI driver is already installed in the Fargate stack and support for EFS is provided out of the box. Run the following command to deploy the stable version of the CSI driver. Encryption in transit is enabled by default when using CSI driver version 1.0:
$ kubectl apply -k "github.com/kubernetes-sigs/aws-efs-csi-driver/deploy/kubernetes/overlays/stable?ref=master"
3.2. Verify that the CSI driver is successfully deployed:
$ kubectl get pod -n kube-system | grep -i csi
efs-csi-node-fp84p 3/3 Running 0 3m4s
4. Create a storage class, persistent volume (PV), and persistent volume claim (PVC):
4.1. First create a storage class by running the following command:
$ cat storageclass.yaml
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: efs-sc
provisioner: efs.csi.aws.com
4.2. Next, create a persistent volume (PV). Here, specify the Amazon EFS file system and access point created for use case 1:
$ curl -o pv.yaml https://raw.githubusercontent.com/kubernetes-sigs/aws-efs-csi-driver/master/examples/kubernetes/multiple_pods/specs/pv.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: efs-pv
spec:
  capacity:
    storage: 5Gi
  volumeMode: Filesystem
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  storageClassName: efs-sc
  csi:
    driver: efs.csi.aws.com
    volumeHandle: fs-56ee5a53::fsap-08e12b7694ed54eb9
Replace the volumeHandle value with your Amazon EFS file system ID and EFS access point ID.
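If you captured the IDs in the FILE_SYSTEM_ID and ACCESS_POINT_ID variables suggested earlier, one way to substitute them into the downloaded spec is with sed (a sketch using GNU sed; on macOS, use sed -i ''):
$ sed -i "s|volumeHandle:.*|volumeHandle: ${FILE_SYSTEM_ID}::${ACCESS_POINT_ID}|" pv.yaml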
4.3. Next, create a persistent volume claim:
$ curl -o claim.yaml https://raw.githubusercontent.com/kubernetes-sigs/aws-efs-csi-driver/master/examples/kubernetes/multiple_pods/specs/claim.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: efs-persistent-claim
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: efs-sc
  resources:
    requests:
      storage: 5Gi
Note: Because Amazon EFS is an elastic file system, it does not enforce any file system capacity limits. The actual storage capacity value in persistent volumes and persistent volume claims is not used when creating the file system. However, since storage capacity is a required field in Kubernetes, you must specify a valid value, such as 5Gi in this example. This value does not limit the size of your Amazon EFS file system.
4.4. Deploy the storage class, persistent volume, and persistent volume claim as shown here:
$ kubectl apply -f storageclass.yaml
$ kubectl apply -f pv.yaml
$ kubectl apply -f claim.yaml
4.5. Check that the storage class, persistent volume, and persistent volume claims were created using the following command:
$ kubectl get sc,pv,pvc
The following is the output from the preceding command:
NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE
storageclass.storage.k8s.io/efs-sc efs.csi.aws.com Delete Immediate false 62s
storageclass.storage.k8s.io/gp2 (default) kubernetes.io/aws-ebs Delete WaitForFirstConsumer false 45m
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
persistentvolume/efs-pv 5Gi RWX Retain Bound default/efs-persistent-claim efs-sc 47s
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
persistentvolumeclaim/efs-persistent-claim Bound efs-pv 5Gi RWX efs-sc
5. Deploy the application
In this step, I am deploying a single application that continuously writes the current date to /var/log/app.log. I am mounting my Amazon EFS file system on /var/log to persist logs from my application on durable EFS storage.
5.1. Create a file called app.yaml and copy the following code into it. Replace the claimName with the PVC you created in steps 4.3 and 4.4.
apiVersion: v1
kind: Pod
metadata:
  name: efs-app
spec:
  containers:
    - name: app
      image: centos
      command: ["/bin/sh"]
      args: ["-c", "while true; do echo $(date -u) >> /var/log/app.log; sleep 5; done"]
      volumeMounts:
        - name: persistent-storage
          mountPath: /var/log
  volumes:
    - name: persistent-storage
      persistentVolumeClaim:
        claimName: efs-persistent-claim
5.2. Deploy the application by running:
$ kubectl apply -f app.yaml
5.3. Check that the pod was successfully created by running:
$ kubectl get pods
You should see the following output:
NAME READY STATUS RESTARTS AGE
efs-app 1/1 Running 0 107s
If your pod fails to start, you can troubleshoot by running the following command:
$ kubectl describe pod efs-app
5.4. Now, verify that your application is successfully creating the log file by running:
$ kubectl exec -ti efs-app -- tail -f /var/log/app.log
Fri Aug 7 19:59:57 UTC 2020
Fri Aug 7 20:00:02 UTC 2020
Fri Aug 7 20:00:07 UTC 2020
Fri Aug 7 20:00:12 UTC 2020
Fri Aug 7 20:00:17 UTC 2020
Fri Aug 7 20:00:22 UTC 2020
Fri Aug 7 20:00:27 UTC 2020
Fri Aug 7 20:00:32 UTC 2020
Fri Aug 7 20:00:37 UTC 2020
Fri Aug 7 20:00:42 UTC 2020
A look into my Amazon EFS file system metrics shows the write activity from my application. This confirms that my logs are successfully stored in EFS and I no longer need to worry about these logs getting lost if my pod is shut down.
With the application logs safely persisted on Amazon EFS, you can now use Fluent Bit to collect and transfer these logs to your own log management solution. The logs can also be sent to other external sources for further analysis. You can learn how to forward these logs to Amazon CloudWatch in this blog.
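As a rough sketch of what such forwarding could look like, the following Fluent Bit configuration tails the application log file and ships it to CloudWatch Logs using the cloudwatch_logs output plugin included in the amazon/aws-for-fluent-bit image. The file name fluent-bit-cloudwatch.conf, the log group name, and the stream prefix are placeholders, and the Fluent Bit pod would need the EFS volume mounted at /var/log plus IAM permissions to write to CloudWatch Logs:
$ cat fluent-bit-cloudwatch.conf
[INPUT]
    Name  tail
    Path  /var/log/app.log
    Tag   app.log
[OUTPUT]
    Name              cloudwatch_logs
    Match             app.*
    region            us-west-2
    log_group_name    /eks/fluent-bit-efs-demo/app
    log_stream_prefix app-
    auto_create_group true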
Use case 2: Persist your container logs centrally on an Amazon EFS file system using the Fluent Bit file plugin
For the second use case, I configure the Fluent Bit file output plugin to write our Amazon EKS container logs to a file on Amazon EFS. I walk through setting up Fluent Bit as the log processor that collects the stdout logs from all of the pods in Kubernetes and writes them to your file system on Amazon EFS. If logs are lost under heavy load while being forwarded to an external source like Elasticsearch, you have peace of mind knowing a copy of them is available on Amazon EFS.
You can enable a lifecycle policy to transition the logs from the Amazon EFS Standard storage class to the EFS Infrequently Accessed (IA) storage class to reduce costs by up to 92%.
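For example, you could enable a 30-day transition to EFS IA with the AWS CLI (a sketch; substitute your own file system ID, and note that AFTER_30_DAYS is only one of the supported transition windows):
$ aws efs put-lifecycle-configuration --file-system-id <your EFS file system ID> --lifecycle-policies TransitionToIA=AFTER_30_DAYS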
The following is the example configuration for the file output plugin:
[OUTPUT]
    Name   file
    Match  *
    Path   /data
1. Create an Amazon EFS file system:
Since I am creating a separate namespace for this use case, I need a new Amazon EFS volume. Repeat step 2 in the “Configure the required infrastructure” section to create a new EFS file system and EFS access point.
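Condensed, those commands look roughly like the following sketch. The FILE_SYSTEM_ID_2 variable and the Name tag value are placeholders I am introducing here, SUBNET_IDS and SECURITY_GROUP_ID come from the earlier steps, and the new file system must reach the available state before its mount targets can be created:
$ FILE_SYSTEM_ID_2=$(aws efs create-file-system --creation-token central-logging --performance-mode generalPurpose --throughput-mode bursting --region us-west-2 --tags Key=Name,Value=EKS-Central-Logging-FluentBit --query "FileSystemId" --output text)
$ for SUBNET in ${SUBNET_IDS}; do aws efs create-mount-target --file-system-id ${FILE_SYSTEM_ID_2} --subnet-id ${SUBNET} --security-groups ${SECURITY_GROUP_ID} --region us-west-2; done
$ aws efs create-access-point --file-system-id ${FILE_SYSTEM_ID_2} --posix-user Uid=1001,Gid=1001 --root-directory "Path=/fluentbit,CreationInfo={OwnerUid=1001,OwnerGid=1001,Permissions=755}"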
2. Create a new namespace to deploy Fluent Bit DaemonSet:
First, create a namespace called “fluent-bit-efs-demo” to keep our deployment separate from other applications running in the Kubernetes cluster.
$ kubectl create namespace fluent-bit-efs-demo
Next, create a service account fluent-bit-efs in the fluent-bit-efs-demo namespace to provide Fluent Bit with permissions to collect logs from Kubernetes cluster components and applications running on the cluster. In the ClusterRole, allow permissions to get, list, and watch namespaces and pods in your Kubernetes cluster. Bind the ServiceAccount to the ClusterRole using a ClusterRoleBinding resource.
apiVersion: v1
kind: ServiceAccount
metadata:
  name: fluent-bit-efs
  namespace: fluent-bit-efs-demo
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
  name: fluentbit-log-reader
rules:
  - apiGroups: [""]
    resources:
      - namespaces
      - pods
    verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: fluentbit-log-reader
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: fluentbit-log-reader
subjects:
  - kind: ServiceAccount
    name: fluent-bit-efs
    namespace: fluent-bit-efs-demo
Copy the preceding code to a file named rbac.yaml and create the resources by executing the following command:
$ kubectl apply -f rbac.yaml
3. Configure your Amazon EFS file system using the CSI driver
3.1. Download the spec to create a persistent volume (PV). Here specify a PV name, and the Amazon EFS file system and access point created earlier:
$ curl -o pv.yaml https://raw.githubusercontent.com/kubernetes-sigs/aws-efs-csi-driver/master/examples/kubernetes/multiple_pods/specs/pv.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: efs-central-logging-pv
spec:
  capacity:
    storage: 5Gi
  volumeMode: Filesystem
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  storageClassName: efs-sc
  csi:
    driver: efs.csi.aws.com
    volumeHandle: fs-8bc6728e::fsap-0f886cb6996e22374
Replace the volumeHandle value with the EFS file system ID and access point ID.
3.2. Next, download the spec to create a persistent volume claim:
$ curl -o claim.yaml https://raw.githubusercontent.com/kubernetes-sigs/aws-efs-csi-driver/master/examples/kubernetes/multiple_pods/specs/claim.yaml
Update the name and namespace as shown here:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: efs-central-logging-claim
  namespace: fluent-bit-efs-demo
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: efs-sc
  resources:
    requests:
      storage: 5Gi
3.3. Next, deploy the persistent volume and persistent volume claim as shown here:
$ kubectl apply -f storageclass.yaml
storageclass.storage.k8s.io/efs-sc created
$ kubectl apply -f pv.yaml
persistentvolume/efs-central-logging-pv created
$ kubectl apply -f claim.yaml
persistentvolumeclaim/efs-central-logging-claim created
3.4. Check that the persistent volume and persistent volume claim were created using the following command:
$ kubectl get sc,pv,pvc
4. Create a Fluent Bit config map:
4.1. Next, create a config map file named file-configmap.yaml to define the log parsing and routing for Fluent Bit:
apiVersion: v1
kind: ConfigMap
metadata:
  name: fluent-bit-config-efs
  namespace: fluent-bit-efs-demo
  labels:
    k8s-app: fluent-bit
data:
  fluent-bit.conf: |
    [SERVICE]
        Parsers_File  parsers.conf
        Log_Level     info
    [INPUT]
        Name              tail
        Tag               kube.*
        Path              /var/log/containers/*.log
        Parser            docker
        DB                /var/log/flb_kube.db
        Mem_Buf_Limit     5M
        Skip_Long_Lines   On
        Refresh_Interval  10
    [FILTER]
        Name      parser
        Match     **
        Parser    nginx
        Key_Name  log
    [OUTPUT]
        Name   file
        Match  *
        Path   /data
  parsers.conf: |
    [PARSER]
        Name        nginx
        Format      regex
        Regex       ^(?<remote>[^ ]*) (?<host>[^ ]*) (?<user>[^ ]*) \[(?<time>[^\]]*)\] "(?<method>\S+)(?: +(?<path>[^\"]*?)(?: +\S*)?)?" (?<code>[^ ]*) (?<size>[^ ]*)(?: "(?<referer>[^\"]*)" "(?<agent>[^\"]*)")? \"-\"$
        Time_Key    time
        Time_Format %d/%b/%Y:%H:%M:%S %z
    [PARSER]
        Name            docker
        Format          json
        Time_Key        time
        Time_Format     %Y-%m-%dT%H:%M:%S.%L
        Time_Keep       On
        Decode_Field_As escaped log
4.2. Deploy the config map by running:
$ kubectl apply -f file-configmap.yaml
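You can confirm the config map exists in the namespace before moving on:
$ kubectl get configmap fluent-bit-config-efs -n fluent-bit-efs-demo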
4.3. Next, define the Kubernetes DaemonSet using the config map in a file called file-fluent-bit-daemonset.yaml:
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: fluentbit
  namespace: fluent-bit-efs-demo
  labels:
    app.kubernetes.io/name: fluentbit
spec:
  selector:
    matchLabels:
      name: fluentbit
  template:
    metadata:
      labels:
        name: fluentbit
    spec:
      serviceAccountName: fluent-bit-efs
      containers:
        - name: aws-for-fluent-bit
          image: amazon/aws-for-fluent-bit:latest
          volumeMounts:
            - name: varlog
              mountPath: /var/log
            - name: varlibdockercontainers
              mountPath: /var/lib/docker/containers
              readOnly: true
            - name: fluent-bit-config-efs
              mountPath: /fluent-bit/etc/
            - name: mnt
              mountPath: /mnt
              readOnly: true
            - name: persistent-storage
              mountPath: /data
          resources:
            limits:
              memory: 500Mi
            requests:
              cpu: 500m
              memory: 100Mi
      volumes:
        - name: varlog
          hostPath:
            path: /var/log
        - name: varlibdockercontainers
          hostPath:
            path: /var/lib/docker/containers
        - name: fluent-bit-config-efs
          configMap:
            name: fluent-bit-config-efs
        - name: mnt
          hostPath:
            path: /mnt
        - name: persistent-storage
          persistentVolumeClaim:
            claimName: efs-central-logging-claim
4.4. Launch the DaemonSet by executing:
$ kubectl apply -f file-fluent-bit-daemonset.yaml
4.5. Verify that the pod was successfully deployed by running:
$ kubectl get pod -n fluent-bit-efs-demo
The following is the output from the preceding command:
NAME READY STATUS RESTARTS AGE
fluentbit-tmrqz 1/1 Running 0 28s
4.6. Verify the logs by running:
$ kubectl logs ds/fluentbit -n fluent-bit-efs-demo
4.7. You can verify that the Amazon EFS file system was mounted successfully on the pod by running:
$ kubectl exec -ti -n fluent-bit-efs-demo fluentbit-w5xfs -- bash
# df -h /data
Filesystem Size Used Avail Use% Mounted on
127.0.0.1:/ 8.0E 32G 8.0E 1% /data
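While you are still inside the pod, you can also confirm that writes land on EFS with the ownership enforced by the access point (UID/GID 1001:1001). The file name write-test is just an arbitrary name for this check:
# touch /data/write-test && ls -ln /data/write-test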
5. Deploy an NGINX application
5.1. Copy the following code to an nginx.yaml file:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
  labels:
    app.kubernetes.io/name: nginx
spec:
  replicas: 4
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx:1.17
          ports:
            - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  ports:
    - port: 80
      targetPort: 80
  selector:
    app: nginx
5.2. Deploy the application by running:
$ kubectl apply -f nginx.yaml
5.3. Verify that your NGINX pods are running:
$ kubectl get pods
6. Validate the logs on Amazon EFS
6.1. Generate some load for the NGINX containers:
$ kubectl patch svc nginx -p '{"spec": {"type": "LoadBalancer"}}'
$ nginxurl=$(kubectl get svc/nginx -o json | jq .status.loadBalancer.ingress[].hostname -r)
$ for i in {1..100}; do echo "Request $i"; sleep 5; curl -s $nginxurl > /dev/null; done
You can see the NGINX logs along with other container logs written to your Amazon EFS file system:
$ kubectl exec -ti -n fluent-bit-efs-demo fluentbit-vs97m -- ls -lt /data
total 40
-rw-r--r-- 1 1001 1001 580 Aug 11 20:44 kube.var.log.containers.nginx-798dcc9989-7vtz6_default_nginx-805964a8a13d6d6ae0f74f9e66db482ed0e7910fed9974902fcb55c0f700f31d.log
-rw-r--r-- 1 1001 1001 580 Aug 11 20:44 kube.var.log.containers.nginx-798dcc9989-hdwkt_default_nginx-f960e074ac5959156fd307c299417694387abb189d5ac8206f9b1a14c0f78d3f.log
-rw-r--r-- 1 1001 1001 13634 Aug 11 20:44 kube.var.log.containers.efs-csi-node-fp84p_kube-system_efs-plugin-a580a3a6ee76fed5ed026d90ff45f241256606591a56123bbd1653f78502d78f.log
-rw-r--r-- 1 1001 1001 1361 Aug 11 20:41 kube.var.log.containers.kube-proxy-qng48_kube-system_kube-proxy-9a763de589762f9f2ae27fe5182f84f4a63f3fa7e0d3b4f7f06f655734a0d4fb.log
-rw-r--r-- 1 1001 1001 9828 Aug 11 20:38 kube.var.log.containers.fluentbit-vs97m_fluent-bit-efs-demo_aws-for-fluent-bit-e0719c8a7d7ed8c56d7b24a21b2846bebee0a6fecbb48ed2eeda4fa5bf289a48.log
Tail one of the NGINX log files in a new window. You should see the requests coming from the load you are generating using curl:
$ kubectl exec -ti -n fluent-bit-efs-demo fluentbit-vs97m -- tail -f /data/kube.var.log.containers.nginx-798dcc9989-7vtz6_default_nginx-805964a8a13d6d6ae0f74f9e66db482ed0e7910fed9974902fcb55c0f700f31d.log
kube.var.log.containers.nginx-798dcc9989-7vtz6_default_nginx-805964a8a13d6d6ae0f74f9e66db482ed0e7910fed9974902fcb55c0f700f31d.log: [1597178652.000000, {"remote":"192.168.47.213","host":"-","user":"-","method":"GET","path":"/","code":"200","size":"612","referer":"-","agent":"curl/7.61.1"}]
kube.var.log.containers.nginx-798dcc9989-7vtz6_default_nginx-805964a8a13d6d6ae0f74f9e66db482ed0e7910fed9974902fcb55c0f700f31d.log: [1597178681.000000, {"remote":"192.168.47.213","host":"-","user":"-","method":"GET","path":"/","code":"200","size":"612","referer":"-","agent":"curl/7.61.1"}]
6.2. Check the Amazon EFS file system metrics. You can see I/O activity on your file system.
Cleaning up
If you no longer need the Amazon EKS cluster and Amazon EFS file system, delete them by running:
$ eksctl delete cluster --name ${CLUSTER_NAME}
$ aws efs delete-file-system --file-system-id <your EFS file system ID>
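Note that a file system cannot be deleted while it still has mount targets. If the delete call fails, remove the mount targets first, for example:
$ for MT in $(aws efs describe-mount-targets --file-system-id <your EFS file system ID> --query "MountTargets[*].MountTargetId" --output text); do aws efs delete-mount-target --mount-target-id ${MT}; done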
If you were using an existing Amazon EKS cluster and must clean up the individual components created during the demo, run the following command:
$ kubectl delete -f <filename>.yaml
Summary
In this blog, I covered how to use an Amazon EFS file system to persist application logs from containers running on Amazon EKS or self-managed Kubernetes clusters. I explored two different use cases:
- Use case 1: Persist your application logs directly on an Amazon EFS file system when default stdout cannot be used. This use case applies to:
- Applications running on AWS Fargate for Amazon EKS. Fargate requires applications to write logs to a file system instead of stdout.
- Traditional applications that are containerized and need the ability to write application logs to a file.
By using Amazon EFS, you can persist your application logs from your AWS Fargate or Amazon EKS containers. You can then use Fluent Bit to collect these logs and forward them to your own log management server or to external sources like Amazon CloudWatch and Elasticsearch.
- Use case 2: Persist your container logs centrally on an Amazon EFS file system using the Fluent Bit file plugin.
When you route your container logs using Fluent Bit to external sources like Elasticsearch for centralized logging, there is a risk of losing logs when these external sources are under heavy load or must be restarted. Storing these logs on EFS gives developers and operations teams peace of mind, as they know a copy of their logs is available on EFS.
Thank you for reading this blog post. Please leave a comment if you have any questions or feedback.