Run Amazon EKS on RHEL Worker Nodes with IPVS Networking

Introduction

Amazon Elastic Kubernetes Service (Amazon EKS) abstracts away much of the work of managing the Kubernetes control plane and the data plane nodes that operate a cluster. AWS offers managed Amazon Machine Images, or AMIs, for Amazon Linux 2, Bottlerocket, and Windows Server. Many customers are required, or simply prefer, to use Red Hat Enterprise Linux (RHEL) for all of their Linux machines. Although RHEL isn’t officially supported for Amazon EKS, customers can build Amazon EKS worker nodes on RHEL using the scripts in this post.

An Amazon EKS cluster has built-in add-ons for networking: the Amazon Virtual Private Cloud (Amazon VPC) container network interface (CNI) plugin for Kubernetes, CoreDNS, and kube-proxy. Each of these components plays a vital role in Kubernetes networking. Of these, kube-proxy maintains the network rules that allow network communication to your Pods from network sessions inside or outside of your cluster. On Linux, kube-proxy supports two proxy modes, iptables and IPVS, with iptables being the default. While the default mode is the most common and is acceptable for many workloads, iptables has known limitations that can ultimately impact a cluster’s performance: its rules are evaluated sequentially, so packet-processing cost grows with the number of Services in the cluster.

With RHEL 8.6, Red Hat transitioned from iptables to nftables for network filtering. This presents a challenge when running Kubernetes components such as the kube-proxy DaemonSet, which at the time of writing doesn’t support nftables on RHEL. One workaround for this challenge is to use IP Virtual Server (IPVS).

IPVS can enhance the performance of a Kubernetes cluster because it was purpose-built for load balancing. It uses hash tables as its underlying data structure, which provides constant-time lookups and makes it better suited for larger clusters. To enable IPVS, you must configure it at the Kubernetes cluster level as well as on the individual worker nodes. In this post, we’ll show you how to configure your Amazon EKS cluster for IPVS networking. We’ll also demonstrate how to build RHEL 8.x or RHEL 9.x worker nodes and join them to your cluster.

Note: There is a network routing issue caused by the nm-cloud-setup service that comes preinstalled on RHEL machines. In this guide, we disable this service and reboot the Amazon Elastic Compute Cloud (Amazon EC2) instances, as recommended by Red Hat in the following KB article.

Prerequisites

This walkthrough requires the following prerequisites, each at a current version as of the time of writing:

  • An AWS account with permissions to create Amazon EKS clusters, Amazon EC2 instances, and AMIs
  • eksctl
  • kubectl
  • Git
  • Packer (installed during the walkthrough if you use AWS CloudShell)

Walkthrough

First, we’ll create an Amazon EKS cluster without any worker nodes and configure IPVS networking. Next, we’ll build out the Amazon EKS worker nodes on RHEL. And finally, we’ll join the Amazon EKS worker nodes to the cluster.

Amazon EKS cluster creation and configuration

Create an Amazon EKS cluster without any nodes.

eksctl create cluster --name <CLUSTER_NAME> --without-nodegroup --version=<VERSION>

Edit the kube-proxy-config ConfigMap to use IPVS. This can also be done through the Amazon EKS kube-proxy add-on, using the instructions from this post.

kubectl -n kube-system edit cm kube-proxy-config

Set the ipvs → scheduler value to your preferred method. Example: rr

  • rr: round-robin
  • lc: least connection
  • dh: destination hashing
  • sh: source hashing
  • sed: shortest expected delay
  • nq: never queue

Set the mode to “ipvs”. Save and exit.

    hostnameOverride: ""
    iptables:
      masqueradeAll: false
      masqueradeBit: 14
      minSyncPeriod: 0s
      syncPeriod: 30s
    ipvs:
      excludeCIDRs: null
      minSyncPeriod: 0s
      scheduler: "rr"
      syncPeriod: 30s
    kind: KubeProxyConfiguration
    metricsBindAddress: 0.0.0.0:10249
    mode: "ipvs"
    nodePortAddresses: null
    oomScoreAdj: -998
    portRange: ""
    udpIdleTimeout: 250ms
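
Note: kube-proxy reads this configuration only at startup. Because we created the cluster without any nodes, the kube-proxy pods on the new worker nodes will start in IPVS mode automatically. If your cluster already has nodes, restart the DaemonSet so the existing pods pick up the change; the command below is standard kubectl, not specific to this walkthrough.

kubectl -n kube-system rollout restart daemonset kube-proxy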

With IPVS enabled at the cluster level, your worker nodes need the proper kernel modules installed and loaded. The installation script referenced in the next portion of the walkthrough takes care of this, but you can also enable the modules manually if you don’t want to run the script.

# Install ipvsadm
$ sudo yum install -y ipvsadm

# Verify no entries are present
$ sudo ipvsadm -L

# Load kernel modules
$ sudo modprobe ip_vs 
$ sudo modprobe ip_vs_rr
$ sudo modprobe ip_vs_wrr 
$ sudo modprobe ip_vs_sh
$ sudo modprobe nf_conntrack
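
Note that modprobe loads the modules only for the running kernel; they won’t survive a reboot. A minimal sketch to persist them on a systemd-based RHEL image (the file name ipvs.conf is an arbitrary choice):

# Persist the IPVS kernel modules across reboots
$ sudo tee /etc/modules-load.d/ipvs.conf <<EOF
ip_vs
ip_vs_rr
ip_vs_wrr
ip_vs_sh
nf_conntrack
EOF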

Building the RHEL Worker Node AMI

We’ll use Packer to build the worker node AMI. Packer can be set up on your local workstation with Application Programming Interface (API) keys for AWS credentials. It can also be set up on an Amazon EC2 instance with an AWS Identity and Access Management (AWS IAM) instance profile that has the necessary permissions, or in AWS CloudShell. We recommend AWS CloudShell because it is free to use and is preconfigured with the same credentials as the user logged in to the AWS Management Console.

To set up Packer in AWS CloudShell, run the following commands.

sudo yum install -y yum-utils shadow-utils
sudo yum-config-manager --add-repo https://rpm.releases.hashicorp.com/AmazonLinux/hashicorp.repo
sudo yum -y install packer
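
You can verify the installation before moving on:

packer version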

Next, clone the GitHub repository for building Amazon EKS RHEL AMIs and cd into the directory.

git clone https://github.com/aws-samples/amazon-eks-ami-rhel.git
cd amazon-eks-ami-rhel/

Launch the Packer build process with a make command similar to one of the examples below, specifying your Kubernetes cluster version along with any other custom values you need.

Example basic command:

make 1.28

Example command for building a customized Defense Information Systems Agency (DISA) Security Technical Implementation Guide (STIG) compliant AMI, owned by a specific AWS Account in AWS GovCloud us-gov-east-1 Region, with binaries stored in a private Amazon Simple Storage Service (Amazon S3) bucket, an AWS IAM instance profile attached, and using AWS Systems Manager Session Manager for Packer terminal access:

make 1.28 source_ami_owners=123456789012 source_ami_filter_name=RHEL9_STIG_BASE*2023-04-14* ami_regions=us-gov-east-1 aws_region=us-gov-east-1 binary_bucket_name=my-eks-bucket binary_bucket_region=us-gov-east-1 iam_role=EC2Role pull_cni_from_github=false ssh_interface=session_manager

The build process can take anywhere from 5 to 20 minutes and, when finished, should produce output similar to the following. Copy down the resulting AMI ID for use in the next steps.

Build 'amazon-ebs' finished after 15 minutes 25 seconds.

 ==> Wait completed after 15 minutes 25 seconds

 ==> Builds finished. The artifacts of successful builds are:
 --> amazon-ebs: AMIs were created:
 us-gov-east-1: ami-0c6d588be9ce412ef

Now that you have successfully built a custom AMI, it’s time to join worker nodes to your cluster. In the root directory of the GitHub repository cloned at the beginning of the walkthrough, there are bash (Linux) and zsh (macOS) scripts that you can execute to join nodes to your cluster. These scripts use eksctl to create Amazon EKS managed node groups. Modify them with any custom tags or labels as desired.

Note: These scripts contain custom Amazon EC2 User Data scripts that disable the nm-cloud-setup service to resolve the issue with this service mentioned earlier in the post.
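
A minimal sketch of the relevant portion of that user data, assuming the service and timer names from the Red Hat KB article:

#!/bin/bash
# Disable the nm-cloud-setup service and timer, then reboot
# so its routing rules are cleared
systemctl disable --now nm-cloud-setup.service nm-cloud-setup.timer
reboot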

Script syntax:

./create_nodegroup.sh <CLUSTER_NAME> <AMI_ID> <NODE_GROUP_NAME> <REGION> <KEY_PAIR> <INSTANCE_TYPE> <MIN_GROUP_SIZE> <DESIRED_GROUP_SIZE> <MAX_GROUP_SIZE>

Example command:

./create_nodegroup.sh rhel-cluster ami-0c6d588be9ce412ef rhelnodegroup us-gov-east-1 govcloudkeypair t3.large 5 5 5

At this point, we want to perform some validation of our Amazon EKS cluster. First, we’ll verify our worker nodes have joined the cluster.

$ kubectl get nodes
 NAME                                                STATUS   ROLES    AGE   VERSION
 ip-192-168-138-204.us-gov-east-1.compute.internal   Ready    <none>   46m   v1.28.1-eks-43840fb
 ip-192-168-168-184.us-gov-east-1.compute.internal   Ready    <none>   46m   v1.28.1-eks-43840fb

Next, we’ll verify kube-system namespace pods are running.

$ kubectl -n kube-system get pods
 NAME                       READY   STATUS    RESTARTS   AGE
 aws-node-qqw24             1/1     Running   0          47m
 aws-node-xpbzl             1/1     Running   0          47m
 coredns-76bcb79755-6vfsd   1/1     Running   0          47m
 coredns-76bcb79755-zvbvq   1/1     Running   0          47m
 kube-proxy-454wj           1/1     Running   0          47m
 kube-proxy-tmgln           1/1     Running   0          47m

Note: We want to confirm that CoreDNS is working as expected, because broken DNS resolution is one of the main issues seen when moving from iptables to nftables or IPVS.
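
A quick way to exercise DNS from inside the cluster is shown below; the pod name and image are illustrative choices, not part of the repository scripts:

kubectl run dns-test --rm -it --restart=Never --image=busybox:1.36 -- nslookup kubernetes.default

If CoreDNS is healthy, the lookup resolves kubernetes.default to the ClusterIP of the kubernetes service.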

Lastly, we’ll verify IPVS entries are present on one of our worker nodes.

$ sudo ipvsadm -L
 IP Virtual Server version 1.2.1 (size=4096)
 Prot LocalAddress:Port Scheduler Flags
   -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
 TCP  ip-10-100-0-1.us-gov-east-1. rr
   -> ip-192-168-113-81.us-gov-eas Masq    1      0          0
   -> ip-192-168-162-166.us-gov-ea Masq    1      1          0
 TCP  ip-10-100-0-10.us-gov-east-1 rr
   -> ip-192-168-104-215.us-gov-ea Masq    1      0          0
   -> ip-192-168-123-227.us-gov-ea Masq    1      0          0
 UDP  ip-10-100-0-10.us-gov-east-1 rr
   -> ip-192-168-104-215.us-gov-ea Masq    1      0          0
   -> ip-192-168-123-227.us-gov-ea Masq    1      0          0

Note that the first entry is the Transmission Control Protocol (TCP) entry for the Kubernetes ClusterIP service (from kubectl get service kubernetes).

The second and third entries are the TCP and User Datagram Protocol (UDP) entries for the kube-dns service (from kubectl get service kube-dns -n kube-system). The endpoints are the coredns pod IPs.
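
You can cross-reference these entries with the commands mentioned above:

# View the ClusterIP services behind the IPVS entries
$ kubectl get service kubernetes
$ kubectl get service kube-dns -n kube-system

# Compare the IPVS endpoints to the coredns pod IPs
$ kubectl -n kube-system get pods -o wide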

Cleaning up

To avoid incurring future charges, delete the two AWS CloudFormation stacks created in the steps above. Delete the managed node group stack first, and then delete the Amazon EKS cluster stack.
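
Because eksctl created both stacks, you can also delete them with eksctl, using the same placeholder values as earlier:

eksctl delete nodegroup --cluster <CLUSTER_NAME> --name <NODE_GROUP_NAME> --region <REGION>
eksctl delete cluster --name <CLUSTER_NAME> --region <REGION>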

Conclusion

In this post, we showed you how to create Amazon EKS worker nodes that run on RHEL 8 and RHEL 9 Amazon EC2 instances and how to run your Amazon EKS clusters in IPVS networking mode.

Brad Watson

Brad Watson is a Principal Solutions Architect at Amazon Web Services (AWS). He helps Worldwide Public Sector customers throughout their cloud journeys on AWS. In his spare time, he enjoys spending time with friends and family and watching and playing sports.

David Aiken

David Aiken is a Senior Cloud Infrastructure Architect at Amazon Web Services (AWS). He supports Non-Profit customers within the United States, helping them migrate workloads to AWS. In his spare time, he enjoys camping, working out, and growing the factory.

Insoo Jang

Insoo Jang is a Sr. Enterprise Account Engineer at Amazon Web Services (AWS). He helps Worldwide Public Sector customers build, scale, and optimize container workloads on AWS. In his spare time, he enjoys fishing, soccer, and spending time with his family.

Jeff Nelson

Jeff Nelson is a Software Engineer at Amazon Web Services (AWS). He works on the Amazon EKS Networking team, developing features and supporting customers. In his spare time, he spends his days training for triathlons and boating on the lake.