Streaming logs from Amazon EKS Windows pods to Amazon CloudWatch Logs using Fluentd
Containers are a method of operating system virtualization that allow you to run an application and its dependencies in resource-isolated processes. Containers allow you to easily package an application’s code, configurations, and dependencies into easy-to-use building blocks that deliver environmental consistency, operational efficiency, developer productivity, and version control.
Using Windows containers gives you all the benefits described above, and it also lets you migrate legacy applications running on unsupported operating systems such as Windows Server 2003, 2008, and 2008 R2, which by now may expose the entire environment to security threats and non-compliance with security rules.
When running containers on AWS, you have two choices to make. First, you choose whether you want to manage servers. You choose AWS Fargate if you want serverless compute for containers and Amazon EC2 if you need control over the installation, configuration, and management of your compute environment. Second, you choose which container orchestrator to use: Amazon Elastic Container Service (ECS) or Amazon Elastic Kubernetes Service (EKS). Check out more on containers here.
Migrating to a new platform also requires having the right tool for the job. This blog post addresses how to stream IIS logs generated in Windows pods to Amazon CloudWatch Logs as a way to centralize logging.
How are we going to achieve this?
Prerequisites and assumptions:
- Amazon EKS cluster (1.14 or newer) up and running. Step by step.
- Launched Amazon EKS Windows worker nodes. Step by step.
- Amazon EKS Windows worker nodes are running on Windows Server 2019.
- An EC2 Windows instance, using the same AMI as the worker nodes, to build the Docker container images.
- An Amazon Elastic Container Registry (ECR) repository created. Step by step.
- AWS CLI (at least version 1.18.17), eksctl, and kubectl properly installed and configured.
In this blog, we are going to do the following tasks:
- Check that Windows worker nodes are up and running.
- Associate an OIDC provider to the Amazon EKS cluster.
- Create IAM policy, role, and Kubernetes Namespace to be used on the service account.
- Create a Windows container image containing IIS and LogMonitor.
- Create a Windows container image containing Fluentd.
- Deploy Windows container image containing Fluentd as a DaemonSet.
- Deploy Windows container image containing IIS and LogMonitor.
- Access the IIS pods to generate logs. (Optional)
- Checking logs on Amazon CloudWatch Logs.
1. Check that Windows worker nodes are up and running.
1.1 Check if your Windows worker nodes return as Ready. Execute the following command:
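For example, a minimal check assuming the beta.kubernetes.io/os label used later in this post for the nodeSelector:

kubectl get nodes -l beta.kubernetes.io/os=windows

The STATUS column for each Windows node should show Ready.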
1.2 Make sure that the vpc-admission-webhook and vpc-resource-controller pods in your Amazon EKS cluster return as Running. Execute the following command:
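A sketch of that check, assuming these components run in the kube-system namespace as they do on a standard Amazon EKS Windows setup:

kubectl get pods -n kube-system

Both pods should report a STATUS of Running.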
2. Associate an OIDC provider to the Amazon EKS cluster.
2.1 Configuring IAM roles for service accounts on Amazon EKS requires associating an OIDC provider with the Amazon EKS cluster. Check if your Amazon EKS cluster already has an OIDC issuer by executing the following command:
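One possible form of that check with the AWS CLI (cluster_name and region-id are placeholders):

aws eks describe-cluster --name cluster_name --region region-id --query "cluster.identity.oidc.issuer" --output text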
Replace the cluster_name with your Amazon EKS cluster name.
Replace region-id with the Region in which your Amazon EKS cluster was launched.
Output:
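For example (the ID portion will differ for your cluster):

https://oidc.eks.region-id.amazonaws.com/id/EXAMPLED539D4633E53DE1B716D3041E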
2.2 If you need to create the association with OIDC, execute the following command:
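A minimal sketch using eksctl:

eksctl utils associate-iam-oidc-provider --cluster cluster_name --region region-id --approve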
- Replace the cluster_name with your Amazon EKS cluster name.
- Replace region-id with the Region in which your Amazon EKS cluster was launched.
3. Create IAM policy, role, and Kubernetes Namespace to be used on the service account.
The IAM service account must have an attached policy containing Amazon CloudWatch permissions, which allows the Amazon EKS cluster to create, describe, and put log events into the log stream.
3.1 Create a JSON file containing the following permissions to be used as the IAM policy:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "EKSFluentdCW",
      "Effect": "Allow",
      "Action": [
        "logs:CreateLogStream",
        "logs:DescribeLogGroups",
        "logs:DescribeLogStreams",
        "logs:CreateLogGroup",
        "logs:PutLogEvents"
      ],
      "Resource": "*"
    }
  ]
}
3.2 To create the IAM policy, execute the following command:
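For example, with the AWS CLI (MyPolicy and FilePath are placeholders):

aws iam create-policy --policy-name MyPolicy --policy-document file://FilePath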
- Replace MyPolicy with your desired policy name.
- Replace FilePath with the path to the JSON policy file created in step 3.1.
3.3 Kubernetes namespaces provide a scope for names and organize your workload inside the cluster. Names of resources need to be unique within a namespace, but not across namespaces. Every resource in Kubernetes can only be in one namespace. For this blog post, create a namespace called amazon-cloudwatch, as shown in the sketch below:
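A minimal sketch that creates the namespace by piping the manifest straight into kubectl:

cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Namespace
metadata:
  name: amazon-cloudwatch
EOF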
3.4 Create the IAM service account and attach the policy previously created. Execute the following command:
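A possible form of the command, assuming the service account is named fluentd-windows to match the DaemonSet deployed later in this post:

eksctl create iamserviceaccount --cluster cluster_name --region region-id --namespace amazon-cloudwatch --name fluentd-windows --attach-policy-arn PolicyARN --override-existing-serviceaccounts --approve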
- Replace the cluster_name with your Amazon EKS cluster name.
- Replace PolicyARN with your policy ARN.
- Replace region-id with the Region on which your Amazon EKS cluster was launched.
- The command above only works if you previously created your Amazon EKS cluster using eksctl. If you receive the message "cluster was not created with eksctl", use the instructions on the AWS Management Console or AWS CLI tabs instead. Also, you must have the Kubernetes service account already created in the amazon-cloudwatch namespace. To create the service account, run the following command:
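For example (the service account name fluentd-windows is an assumption matching the manifests later in this post):

kubectl create serviceaccount fluentd-windows -n amazon-cloudwatch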
4. Create a Windows container image containing IIS and LogMonitor
4.1 To test the functionality explained in this blog post, create a Windows container image containing IIS and LogMonitor. For more instructions on how to use LogMonitor, see the official GitHub repository.
In the example below, there is a Dockerfile to build the Windows container image containing IIS and LogMonitor.
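The following is a minimal sketch of such a Dockerfile, assuming LogMonitor.exe has already been downloaded from the microsoft/windows-container-tools releases page into the build context alongside LogMonitorConfig.json:

FROM mcr.microsoft.com/windows/servercore/iis:windowsservercore-ltsc2019
WORKDIR /LogMonitor
# LogMonitor.exe and its configuration are copied from the build context.
COPY LogMonitor.exe LogMonitorConfig.json ./
# Wrap IIS (started through ServiceMonitor) with LogMonitor so the
# configured log sources are written to the container's stdout.
ENTRYPOINT ["C:\\LogMonitor\\LogMonitor.exe", "C:\\ServiceMonitor.exe", "w3svc"]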
LogMonitorConfig.json
This sample LogMonitorConfig configuration retrieves all the log files with the extension .log saved in C:\inetpub\logs and its subdirectories, including the IIS access logs.
{
  "LogConfig": {
    "sources": [
      {
        "type": "EventLog",
        "startAtOldestRecord": true,
        "eventFormatMultiLine": false,
        "channels": [
          {
            "name": "system",
            "level": "Error"
          }
        ]
      },
      {
        "type": "File",
        "directory": "c:\\inetpub\\logs",
        "filter": "*.log",
        "includeSubdirectories": true
      },
      {
        "type": "ETW",
        "providers": [
          {
            "providerName": "IIS: WWW Server",
            "ProviderGuid": "3A2A4E84-4C21-4981-AE10-3FDA0D9B0F83",
            "level": "Information"
          },
          {
            "providerName": "Microsoft-Windows-IIS-Logging",
            "ProviderGuid": "7E8AD27F-B271-4EA2-A783-A47BDE29143B",
            "level": "Information",
            "keywords": "0xFF"
          }
        ]
      }
    ]
  }
}
Once the build completes, push the image to your ECR registry.
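A sketch of the build and push steps (account-id, region-id, and the repository name iis-logmonitor are placeholders/assumptions):

aws ecr get-login-password --region region-id | docker login --username AWS --password-stdin account-id.dkr.ecr.region-id.amazonaws.com
docker build -t iis-logmonitor .
docker tag iis-logmonitor:latest account-id.dkr.ecr.region-id.amazonaws.com/iis-logmonitor:latest
docker push account-id.dkr.ecr.region-id.amazonaws.com/iis-logmonitor:latest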
5. Create a Windows container image containing Fluentd
5.1 To aggregate logs from Kubernetes pods (more specifically, the Docker logs), we will use Windows Server Core as the base image, the Fluentd RubyGem to parse and rewrite the logs, and the aws-sdk-cloudwatchlogs RubyGem to authenticate and communicate with Amazon CloudWatch Logs and other AWS services.
The Dockerfile will build the container image containing all of these requirements. This Dockerfile uses a Docker feature called multi-stage build, reducing the final container image size by approximately 600 MB.
Once the build completes, push the image to your ECR registry. If you experience freezing during the build process, run the following command on the build server to disable real-time monitoring during the build phase.
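One common workaround, assuming the freeze is caused by Windows Defender scanning the Docker build, is to temporarily disable real-time monitoring in an elevated PowerShell session (re-enable it with $false after the build finishes):

Set-MpPreference -DisableRealtimeMonitoring $true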
5.2 To configure Fluentd, we need to inject the fluent.conf and containers.conf files into the container. Since we are using Kubernetes, we'll use an object called a ConfigMap to mount the configuration files directly inside the pod. ConfigMaps allow you to decouple application configuration from application code, so you can change your configuration without rebuilding the container image.
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: fluentd-windows
  namespace: amazon-cloudwatch
  labels:
    k8s-app: fluentd-windows
data:
  AWS_REGION: region-id
  CLUSTER_NAME: cluster_name
  fluent.conf: |
    <match fluent.**>
      @type null
    </match>
    @include containers.conf
  containers.conf: |
    <source>
      @type tail
      @id in_tail_container_logs
      path /var/log/containers/*.log
      exclude_path ["/var/log/containers/fluentd*"]
      pos_file /var/log/fluentd-containers.log.pos
      tag k8s.*
      read_from_head true
      <parse>
        @type "json"
        time_format %Y-%m-%dT%H:%M:%S.%NZ
      </parse>
    </source>
    <filter **>
      @type record_transformer
      @id filter_containers_stream_transformer
      <record>
        stream_name ${tag_parts[4]}
      </record>
    </filter>
    <match k8s.**>
      @type cloudwatch_logs
      @id out_cloudwatch_logs_containers
      region "#{ENV.fetch('AWS_REGION')}"
      log_group_name "/EKS/#{ENV.fetch('CLUSTER_NAME')}/Windows"
      log_stream_name_key stream_name
      remove_log_stream_name_key true
      auto_create_stream true
      <buffer>
        flush_interval 5
        chunk_limit_size 2m
        queued_chunks_limit_size 32
        retry_forever true
      </buffer>
    </match>
- Replace the region-id value with the Region on which your Amazon EKS cluster was launched.
- Replace the cluster_name value with your Amazon EKS cluster name.
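Save the manifest above to a file and apply it (the file name fluentd-configmap.yaml below is just an example):

kubectl apply -f fluentd-configmap.yaml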
6. Deploy Windows container image containing Fluentd as a DaemonSet
A DaemonSet ensures that all (or some) nodes run a copy of a pod. As nodes are added to the cluster, pods are added to them as well. To make sure that all Windows worker nodes have a copy of the Windows Fluentd pod, we are going to deploy a DaemonSet using the deployment file below.
6.1 Create a deployment file using the following content:
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: fluentd-windows
  namespace: amazon-cloudwatch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: fluentd-windows
  namespace: amazon-cloudwatch
rules:
  - apiGroups:
      - ""
    resources:
      - pods
      - namespaces
    verbs:
      - get
      - list
      - watch
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: fluentd-windows
roleRef:
  kind: ClusterRole
  name: fluentd-windows
  apiGroup: rbac.authorization.k8s.io
subjects:
  - kind: ServiceAccount
    name: fluentd-windows
    namespace: amazon-cloudwatch
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: fluentd-windows
  namespace: amazon-cloudwatch
  labels:
    k8s-app: fluentd-windows
spec:
  selector:
    matchLabels:
      name: fluentd-windows
  template:
    metadata:
      labels:
        name: fluentd-windows
    spec:
      serviceAccount: fluentd-windows
      serviceAccountName: fluentd-windows
      # Because Fluentd needs to write to /etc/fluent/ but we mount the
      # configuration using a ConfigMap, which is read-only, this
      # initContainer copies the files from the read-only folder to a
      # read-write folder.
      initContainers:
        - name: copy-fluentd-config
          image: mcr.microsoft.com/windows/servercore:ltsc2019
          command: ['powershell', '-command', 'cp /etc/temp/*.conf /etc/fluent/']
          volumeMounts:
            - name: fluentdconftemp
              mountPath: /etc/temp/
            - name: fluentdconf
              mountPath: /etc/fluent
      containers:
        - name: fluentd-windows
          image: Fluentd-ECRrepository/tag
          env:
            - name: AWS_REGION
              valueFrom:
                configMapKeyRef:
                  name: fluentd-windows
                  key: AWS_REGION
            - name: CLUSTER_NAME
              valueFrom:
                configMapKeyRef:
                  name: fluentd-windows
                  key: CLUSTER_NAME
          resources:
            limits:
              memory: 2Gi
            requests:
              cpu: 100m
              memory: 1Gi
          volumeMounts:
            - name: fluentdconftemp
              mountPath: /etc/temp/
            - name: fluentdconf
              mountPath: /etc/fluent
            - name: varlog
              mountPath: /var/log
            - name: varlibdockercontainers
              mountPath: C:\ProgramData\Docker\containers
              readOnly: true
      nodeSelector:
        beta.kubernetes.io/os: windows
      terminationGracePeriodSeconds: 30
      volumes:
        - name: fluentdconftemp
          configMap:
            name: fluentd-windows
        - name: varlog
          hostPath:
            path: C:\var\log
        - name: varlibdockercontainers
          hostPath:
            path: C:\ProgramData\Docker\containers
        - name: fluentdconf
          emptyDir: {}
- Replace the Fluentd-ECRrepository/tag image with the ECR address and tag generated in step 5.1.
6.2 Deploy the pod using the following command:
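Assuming you saved the manifest from step 6.1 as fluentd-daemonset.yaml (the file name is an example):

kubectl apply -f fluentd-daemonset.yaml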
You must be sure your Fluentd pods return as Running before moving on to the next step.
- It can take approximately 7 minutes for the pods to reach the Running state, due to the container image size. You can check the status with this command:
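For example:

kubectl get pods -n amazon-cloudwatch -o wide

Each Fluentd pod should show a STATUS of Running.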
7. Deploy Windows container image containing IIS and LogMonitor
The deployment will ensure that the desired number of replicas (pods), 2 in this case, is running on your Windows worker nodes, based on the nodeSelector attribute "beta.kubernetes.io/os: windows." The service will create a virtual IP inside Kubernetes to load balance traffic between the pods.
7.1 Create a deployment file using the following content:
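A minimal sketch of such a file (the names windows-iis, the default namespace, and the ClusterIP service are assumptions; the image placeholder matches the note below):

cat <<EOF > windows-iis-deployment.yaml
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: windows-iis
  namespace: default
  labels:
    app: windows-iis
spec:
  replicas: 2
  selector:
    matchLabels:
      app: windows-iis
  template:
    metadata:
      labels:
        app: windows-iis
    spec:
      containers:
        - name: windows-iis
          image: IIS-ECRrepository/tag
          ports:
            - containerPort: 80
      nodeSelector:
        beta.kubernetes.io/os: windows
---
apiVersion: v1
kind: Service
metadata:
  name: windows-iis
  namespace: default
spec:
  selector:
    app: windows-iis
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80
EOF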
- Replace the IIS-ECRrepository/tag image with the ECR address and tag generated in step 4.1.
7.2 Deploy the pod using the following command:
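For example, assuming the file name used in the sketch above:

kubectl apply -f windows-iis-deployment.yaml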
8. Access the IIS pods to generate logs (optional)
8.1 This is an optional step. You can wait for real traffic to hit your container, or you can force it to quickly see the results. To generate logs directly from inside the container, execute the following command:
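A sketch, assuming the app=windows-iis label from the deployment sketch above (replace pod-name with one of the returned pod names):

kubectl get pods -l app=windows-iis
kubectl exec -it pod-name -- powershell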
8.2 From inside the container, execute the following command:
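For example, a short PowerShell loop that requests the default IIS page a few times so that access log entries are generated:

1..10 | ForEach-Object { Invoke-WebRequest -Uri http://localhost -UseBasicParsing }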
9. Checking logs on Amazon CloudWatch Logs
9.1 To check whether the logs have successfully streamed into the log streams, open the Amazon CloudWatch console, select the log group /EKS/cluster_name/Windows, and then choose the desired log stream, which is mapped to your pod.
9.2 As you can see, the IIS logs are now streaming into the log stream.
Conclusion
Using Amazon CloudWatch Logs to centralize all of your Windows pod logs allows administrators to quickly identify application issues and gain operational visibility and insights for the business.