AWS Open Source Blog

Using the K3s Kubernetes distribution in an Amazon EKS CI/CD pipeline

A modern microservices application stack, a CI/CD pipeline, Kubernetes as the orchestrator, hundreds or thousands of deployments per day: this all sounds good, until you realize that these deployments leave your Kubernetes development or staging environments in a mess, and that changes made by one developer team affect another team's Kubernetes environment. In this post, we will walk through why these external changes affect our Kubernetes environments and how to prevent it.

This problem happens because, although we usually run various code checks and image scans in the pipeline before pushing images to the repository and deploying our resources, there are no proper unit or integration tests running inside the pipeline itself, as no Kubernetes cluster is available there. Effectively, we are testing our changes only after deployment.

One solution is to provision a clean Kubernetes cluster during each build, test changes, and then tear it down. However, this is time consuming and not cost effective. Instead, we can solve this problem using an open source, lightweight K3s Kubernetes distribution from Rancher, with Amazon Elastic Kubernetes Service (Amazon EKS) and AWS CodePipeline.

What is K3s?

K3s is an open source, lightweight, and fully compliant Kubernetes distribution that is less than 100 MB in size and is designed for IoT, edge, and CI/CD environments. Startup takes only about 40 seconds.

What is even more interesting, especially for the CI/CD use case, is that we can run K3s inside a Docker container. Rancher provides another tool, k3d, a lightweight wrapper that runs K3s in a Docker container. In that case, the package is about 10 MB and startup is even faster, at around 15-20 seconds.
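
To get a feel for this before wiring it into the pipeline, we can try k3d locally. The sketch below uses the same k3d v1.x commands that appear later in our buildspec (k3d create and k3d get-kubeconfig); k3s-default is the cluster name k3d uses by default.

# Create a local K3s cluster inside Docker (takes roughly 15-20 seconds)
k3d create
sleep 20
# Point kubectl at the new cluster and confirm the node is ready
export KUBECONFIG="$(k3d get-kubeconfig --name='k3s-default')"
kubectl get nodes
# Tear the cluster down when finished
k3d delete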

Let’s get started and learn how to implement this solution.

Prerequisites

To complete this tutorial, we need:

An AWS account, with the AWS CLI installed and configured
eksctl and kubectl installed locally
git and a GitHub account, for forking the sample service used later in the walkthrough

Provision Amazon EKS cluster

There are many ways to provision an Amazon EKS cluster, including the AWS Management Console and the AWS CLI. We recommend eksctl, but use whichever method you prefer, and adjust the node type and region to your needs. Cluster provisioning typically takes around 15 minutes.

eksctl create cluster \
--name k3s-lab \
--version 1.16 \
--nodegroup-name k3s-lab-workers \
--node-type t2.medium \
--nodes 2 \
--alb-ingress-access \
--region us-west-2

For the purpose of this exercise, we use the t2.medium instance type. Remember to choose an appropriate instance type if you are spinning up an Amazon EKS cluster in a production environment.

After the cluster is provisioned, we verify that it is up and that kubectl is properly configured, using the command:

kubectl get nodes

Our output should look like this:

NAME                             STATUS   ROLES     AGE       VERSION
ip-192-168-12-121.ec2.internal   Ready    <none>    82s       v1.16.8-eks-e16311
ip-192-168-38-246.ec2.internal   Ready    <none>    80s       v1.16.8-eks-e16311

Set up AWS CodePipeline

We set up CodePipeline by doing the following:

1. Set the ACCOUNT_ID variable:

ACCOUNT_ID=$(aws sts get-caller-identity --output text --query 'Account')

2. In CodePipeline, we use AWS CodeBuild to deploy a sample Kubernetes service. This requires an AWS Identity and Access Management (IAM) role that can interact with the Amazon EKS cluster. In this step, we create an IAM role and add an inline policy to use in the CodeBuild stage. This policy allows AWS CodeBuild to interact with the Amazon EKS cluster via kubectl. Execute the following commands to create the role and attach the policy.

TRUST="{ \"Version\": \"2012-10-17\", \"Statement\": [ { \"Effect\": \"Allow\", \"Principal\": { \"AWS\": \"arn:aws:iam::${ACCOUNT_ID}:root\" }, \"Action\": \"sts:AssumeRole\" } ] }"
echo '{ "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Action": "eks:Describe*", "Resource": "*" } ] }' > /tmp/iam-role-policy
aws iam create-role --role-name EksWorkshopCodeBuildKubectlRole --assume-role-policy-document "$TRUST" --output text --query 'Role.Arn'
aws iam put-role-policy --role-name EksWorkshopCodeBuildKubectlRole --policy-name eks-describe --policy-document file:///tmp/iam-role-policy
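
Optionally, we can verify that the role and the inline policy were created before moving on:

aws iam get-role --role-name EksWorkshopCodeBuildKubectlRole --query 'Role.Arn' --output text
aws iam get-role-policy --role-name EksWorkshopCodeBuildKubectlRole --policy-name eks-describe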

3. Now that the IAM role is created, we add it to the aws-auth ConfigMap for the Amazon EKS cluster. Once added, this role allows CodeBuild to run kubectl commands against the Amazon EKS cluster.

ROLE="    - rolearn: arn:aws:iam::$ACCOUNT_ID:role/EksWorkshopCodeBuildKubectlRole\n      username: build\n      groups:\n        - system:masters"
kubectl get -n kube-system configmap/aws-auth -o yaml | awk "/mapRoles: \|/{print;print \"$ROLE\";next}1" > /tmp/aws-auth-patch.yml 
kubectl patch configmap/aws-auth -n kube-system --patch "$(cat /tmp/aws-auth-patch.yml)"
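
To confirm that the role entry landed under mapRoles, we can print the patched ConfigMap (an optional check):

kubectl get configmap aws-auth -n kube-system -o yaml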

4. Next we fork the sample Kubernetes service so that we can modify the repository and trigger builds. Log in to GitHub and fork the sample service into the account of your choice. Refer to the sample Kubernetes service repository for more information. After the repository is forked, clone it to the local environment so we can work with the files using our favorite IDE or text editor.

git clone https://github.com/YOUR-USERNAME/eks-workshop-sample-api-service-go.git

5. In order for CodePipeline to receive callbacks from GitHub, we must generate a personal access token. (For more information, see the CodePipeline documentation.) Once created, the access token is stored securely and reused, so this step is only required during the first run, or when we need to generate new keys.

6. Next, we create the CodePipeline using AWS CloudFormation. Navigate to the AWS Management Console to launch the CloudFormation stack. Once the console is open, enter the GitHub user name, the personal access token created in the previous step, and the Amazon EKS cluster name (k3s-lab). Then select the acknowledge box and choose Create stack. This step takes about 10 minutes to complete.
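
If you prefer the command line, the same stack can be launched with the AWS CLI. This is only a sketch: the template file name and the parameter keys (GitHubUser, GitHubToken, EksClusterName) are assumptions here, so check the actual CloudFormation template for the exact names it expects.

# Hypothetical example; parameter keys depend on the template you are using
aws cloudformation create-stack \
  --stack-name eksws-codepipeline \
  --template-body file://codepipeline.yaml \
  --capabilities CAPABILITY_NAMED_IAM \
  --parameters ParameterKey=GitHubUser,ParameterValue=YOUR-USERNAME \
               ParameterKey=GitHubToken,ParameterValue=YOUR-TOKEN \
               ParameterKey=EksClusterName,ParameterValue=k3s-lab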

After the CodePipeline is created, we can check its status in the CodePipeline console and verify that the deployment was applied to our cluster using the command:

kubectl describe deployment hello-k8s

Add k3d to AWS CodePipeline

Now let’s modify the buildspec.yml file in our forked repository and add unit testing using k3d.

We will walk through the required modifications one at a time; alternatively, the full buildspec.yml file is provided at the end of this section.

1. Install k3d in the CodeBuild environment.

- curl -sS https://raw.githubusercontent.com/rancher/k3d/main/install.sh | TAG=v1.7.0 bash

2. Create the k3d cluster during the build phase and wait 20 seconds for it to spin up.

- k3d create
- sleep 20

3. Configure kubectl for the k3d cluster.

- export KUBECONFIG="$(k3d get-kubeconfig --name='k3s-default')"

4. By default, eksctl configures the Amazon EKS cluster nodes with access to pull images from the Amazon Elastic Container Registry (Amazon ECR) image repository. Non-Amazon EKS clusters, however, require additional configuration for this; you can find the instructions in the Kubernetes documentation. Because there are a few steps, we've moved them into a separate script (create_secret.sh) and call it from the buildspec.yml file.

- ./create_secret.sh

Add the file create_secret.sh to the working folder of the forked repository with the following content:

#!/bin/sh
# Build the registry secret name and fetch a temporary ECR authorization token
SECRET_NAME=$AWS_REGION-ecr-registry
TOKEN=$(aws ecr get-authorization-token --output text --query authorizationData[].authorizationToken | base64 -d | cut -d: -f2)
echo "ENV variables setup done."
# Create a docker-registry secret so the k3d cluster can pull images from Amazon ECR
kubectl create secret docker-registry $SECRET_NAME \
  --docker-server=https://$REPOSITORY_URI \
  --docker-username=AWS \
  --docker-password="${TOKEN}" \
  --docker-email=DUMMY_DOCKER_EMAIL
# Attach the secret to the default service account as an imagePullSecret
kubectl patch serviceaccount default -p '{"imagePullSecrets":[{"name":"'$SECRET_NAME'"}]}'
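
As a quick optional check (not part of the original script), we can confirm from inside the build that the secret exists and is attached to the default service account:

kubectl get secret $AWS_REGION-ecr-registry
kubectl get serviceaccount default -o jsonpath='{.imagePullSecrets[*].name}'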

5. Deploy our application's resources to the k3d cluster and wait 20 seconds for them to come up.

- kubectl apply -f hello-k8s.yml
- sleep 20
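
As an optional refinement (not part of the original walkthrough), the fixed sleep could be replaced with a readiness wait, which returns as soon as the hello-k8s deployment is ready and fails the build if it never becomes ready:

- kubectl rollout status deployment/hello-k8s --timeout=120s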

Configure testing

During this step, we run our unit and/or integration tests. For this example, we've provided a simple script that hits the endpoint of our service. For integration testing, we can also deploy other microservices from our stack into the k3d cluster.

- ./unit_test.sh

Add the file unit_test.sh to the working folder of the forked repository with the following content.

#!/bin/sh
set -e
# Look up the IP that the K3s service load balancer assigned to the hello-k8s service and call it;
# a connection failure or timeout makes the script (and therefore the build) fail
api_host=$(kubectl get svc hello-k8s -o json | jq -r .status.loadBalancer.ingress[].ip)
curl -m 2 $api_host
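
If the service needs longer than the 20-second sleep to get an IP, a small retry loop makes the test less flaky. This is an optional hardening of the script above, not part of the original example:

#!/bin/sh
set -e
# Retry for up to about 60 seconds until the load balancer IP is assigned
for i in $(seq 1 12); do
  api_host=$(kubectl get svc hello-k8s -o json | jq -r '.status.loadBalancer.ingress[0].ip // empty')
  [ -n "$api_host" ] && break
  sleep 5
done
# Fail the build if the endpoint is unreachable or returns an HTTP error
curl -fsS -m 2 "$api_host"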

Check whether testing was successful

The last step is to check whether testing was successful and, if it was, deploy our application to the Amazon EKS cluster. If testing failed, we fail the CodePipeline and do not deploy to Amazon EKS. CodeBuild has a built-in variable, CODEBUILD_BUILD_SUCCEEDING, that indicates the status of the build phase. Let's use it in our code.

- bash -c "if [ /"$CODEBUILD_BUILD_SUCCEEDING/" == /"0/" ]; then exit 1; fi" 
- echo Build stage successfully completed on `date`

buildspec.yml

---
version: 0.2
phases:
  install:
    commands:
      - curl -sS -o aws-iam-authenticator https://amazon-eks.s3-us-west-2.amazonaws.com/1.10.3/2018-07-26/bin/linux/amd64/aws-iam-authenticator
      - curl -sS -o kubectl https://amazon-eks.s3-us-west-2.amazonaws.com/1.14.6/2019-08-22/bin/linux/amd64/kubectl
      - curl -sS https://raw.githubusercontent.com/rancher/k3d/main/install.sh | TAG=v1.7.0 bash
      - chmod +x ./kubectl ./aws-iam-authenticator
      - export PATH=$PWD/:$PATH
      - apt-get update && apt-get -y install jq python3-pip python3-dev && pip3 install --upgrade awscli
  pre_build:
      commands:
        - TAG="$REPOSITORY_NAME.$REPOSITORY_BRANCH.$ENVIRONMENT_NAME.$(date +%Y-%m-%d.%H.%M.%S).$(echo $CODEBUILD_RESOLVED_SOURCE_VERSION | head -c 8)"
        - sed -i 's@CONTAINER_IMAGE@'"$REPOSITORY_URI:$TAG"'@' hello-k8s.yml
        - $(aws ecr get-login --no-include-email)
  build:
    commands:
      - docker build --tag $REPOSITORY_URI:$TAG .
      - docker push $REPOSITORY_URI:$TAG
      # Creating k3d cluster
      - k3d create
      # Waiting for cluster creation for 20 seconds
      - sleep 20
      # Configuring kubectl for k3d cluster
      - export KUBECONFIG="$(k3d get-kubeconfig --name='k3s-default')"
      # Creating secret as per https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/#add-image-pull-secret-to-service-account
      # to enable k3d cluster pull images from ECR
      - ./create_secret.sh
      # Applying our service and deployment manifest
      - kubectl apply -f hello-k8s.yml
      # Waiting for pods and service to come up
      - sleep 20
      # Running unit test
      - ./unit_test.sh
  post_build:
    commands:
      # Checking if build phase including unit test completed successfully, if not we don't proceed with deployment
      - bash -c "if [ /"$CODEBUILD_BUILD_SUCCEEDING/" == /"0/" ]; then exit 1; fi"
      - echo Build stage successfully completed on `date`
      - CREDENTIALS=$(aws sts assume-role --role-arn $EKS_KUBECTL_ROLE_ARN --role-session-name codebuild-kubectl --duration-seconds 900)
      - export KUBECONFIG=$HOME/.kube/config
      - export AWS_ACCESS_KEY_ID="$(echo ${CREDENTIALS} | jq -r '.Credentials.AccessKeyId')"
      - export AWS_SECRET_ACCESS_KEY="$(echo ${CREDENTIALS} | jq -r '.Credentials.SecretAccessKey')"
      - export AWS_SESSION_TOKEN="$(echo ${CREDENTIALS} | jq -r '.Credentials.SessionToken')"
      - export AWS_EXPIRATION=$(echo ${CREDENTIALS} | jq -r '.Credentials.Expiration')
      - aws eks update-kubeconfig --name $EKS_CLUSTER_NAME
      - kubectl apply -f hello-k8s.yml
      - printf '[{"name":"hello-k8s","imageUri":"%s"}]' $REPOSITORY_URI:$TAG > build.json
artifacts:
  files: build.json

After all the changes are complete and the new files are in our local forked repository, we commit and push the changes so that CodePipeline can pick them up and apply them to our pipeline.

git add .
git commit -m "k3d modified pipeline"
git push

After we push the changes, we can go to the CodePipeline console and check the pipeline status and logs.

Screenshot of the AWS CodePipeline console where the user is checking the pipeline status and logs.

Navigate to Details in the Build section. Here, under Build Logs, we can inspect what happened during the pipeline run.

Screenshot of terminal displaying what was happening during a pipeline run.

Cleaning up

To avoid incurring future charges, we need to perform a few clean-up steps.

1. Delete the CloudFormation stack created for CodePipeline. Open the CloudFormation management console, select the box next to the eksws-codepipeline stack, select Delete, and then confirm deletion in the pop-up window.

Screenshot of AWS CloudFormation console showing users how to delete a specific stack.

2. Delete the Amazon ECR repository. Open the Amazon ECR management console, and select the box next to the repository name starting with eksws. Select Delete, and then confirm the deletion.

Screenshot of the ECR management console showing how a user can delete an ECR repository.

3. Empty and delete the Amazon S3 bucket used by CodeBuild for build artifacts. The bucket name begins with eksws-codepipeline.

Select the bucket, then select Empty. Once the bucket is empty, select Delete to finish deleting it.

Screenshot showing the process to delete an S3 bucket as part of this example.

4. Finally, delete the Amazon EKS cluster using the command:

eksctl delete cluster --name=k3s-lab
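
The console-based cleanup in steps 1-3 can also be done from the command line. This is a sketch: the ECR repository and S3 bucket names below are placeholders, so substitute the actual names from your account (both begin with eksws, as noted above).

# Delete the CodePipeline CloudFormation stack
aws cloudformation delete-stack --stack-name eksws-codepipeline
# Delete the ECR repository (placeholder name) and force-remove its images
aws ecr delete-repository --repository-name YOUR-EKSWS-REPOSITORY --force
# Empty and remove the artifact bucket (placeholder name)
aws s3 rb s3://YOUR-EKSWS-CODEPIPELINE-BUCKET --force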

Conclusion

In this blog post, we explored how to add unit and integration testing to an Amazon EKS CI/CD pipeline, using the open source, lightweight K3s Kubernetes distribution. If you are using different CI/CD tooling for your Amazon EKS deployments, you can easily incorporate K3s there as well.

Get involved

You can join the open source K3s community, where you can ask questions, collaborate, and contribute to the project.

 

Petro Kashlikov

Petro Kashlikov is Technical Account Manager for AWS. Petro is also passionate about Containers and works with AWS customers to design, deploy, and manage their AWS workloads/architectures. In his spare time, he enjoys traveling, biking, skiing and other active sports.