Cross Amazon EKS cluster App Mesh using AWS Cloud Map
NOTICE: October 04, 2024 – This post no longer reflects the best guidance for configuring a service mesh with Amazon EKS and its examples no longer work as shown. Please refer to newer content on Amazon VPC Lattice.
——–
Overview
In this article, we are going to explore how to use AWS App Mesh across Amazon EKS clusters. App Mesh is a service mesh that lets you control and monitor services spanning multiple clusters deployed in the same VPC. We'll demonstrate this by using two EKS clusters within a VPC and an App Mesh that spans the clusters using AWS Cloud Map. This example shows how EKS deployments can use AWS Cloud Map for service discovery when using App Mesh.
We will use two EKS clusters in a single VPC to explain the concept of a cross-cluster mesh using AWS Cloud Map. The diagram below illustrates the big picture. This is intentionally meant to be a simple example for clarity, but in the real world App Mesh can span multiple different container clusters, such as Amazon ECS, AWS Fargate, and Kubernetes on EC2.
In this example, there are two EKS clusters within a VPC and a mesh spanning both clusters. The setup has AWS Cloud Map services and three EKS deployments, as described below. The front container will be deployed in cluster 1 and the color containers will be deployed in cluster 2. The goal is to have a single mesh across the clusters using AWS Cloud Map based service discovery.
Clusters
We will spin up two EKS clusters in the same VPC for simplicity and configure a mesh as we deploy the cluster components.
Deployments
There are two deployments of the colorapp, blue and red. Pods of both these deployments are registered behind the virtual service colorapp.appmesh-demo.pvt.aws.local. Blue pods are registered with the mesh as the colorapp-blue virtual node, and red pods as the colorapp-red virtual node. These virtual nodes are configured to use AWS Cloud Map for service discovery, so the IP addresses of these pods are registered with the AWS Cloud Map service with corresponding attributes. Additionally, a colorapp virtual service is defined that routes traffic to the blue and red virtual nodes.
The front app acts as a gateway that makes remote calls to the colorapp. It has a single deployment, with pods registered with the mesh as the front virtual node. This virtual node uses the colorapp virtual service as a backend, which configures the Envoy proxy injected into the front pod to use App Mesh's Endpoint Discovery Service (EDS) to discover the colorapp endpoints.
Mesh
App Mesh components will be deployed from one of the two clusters; it does not really matter which one you deploy them from. The mesh consists of a virtual node per service and a virtual service, whose virtual router (the provider) has routes that split traffic equally between red and blue. We will use custom CRDs, a mesh controller, and a mesh injector, which handle mesh creation through standard kubectl and automatically inject the proxy sidecars on pod creation.
Note: You can use native App Mesh API calls instead to deploy the App Mesh components, if you prefer.
AWS Cloud Map
As we create the mesh, we will use service discovery attributes, which will automatically create the DNS records in the namespace that we have pre-created. The front application in the first cluster will leverage this DNS entry in AWS Cloud Map to talk to the colorapp on the second cluster.
So, let's get started…
Prerequisites
In order to successfully carry out the base deployment:
- Make sure to have the newest AWS CLI installed, that is, version 1.16.268 or above.
- Make sure to have kubectl installed, at least version 1.11 or above.
- Make sure to have jq installed.
- Make sure to have aws-iam-authenticator installed, required for eksctl.
- Install eksctl, for example, on macOS with brew tap weaveworks/tap and brew install weaveworks/tap/eksctl, and make sure it's at least on version 0.1.26.
Note that this walkthrough assumes throughout that you operate in the us-east-1 Region.
Cluster provisioning
Create an EKS cluster with eksctl using the following command:
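A command along these lines creates the first cluster (the cluster name, node type, and node count below are illustrative, not prescribed by the walkthrough; adjust them to your needs):

```shell
# Create the first EKS cluster; --appmesh-access attaches the
# App Mesh IAM policy to the node group.
eksctl create cluster \
  --name cluster1 \
  --region us-east-1 \
  --nodes 2 \
  --node-type t3.medium \
  --appmesh-access
```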
Once cluster creation is complete, open another tab and create another EKS cluster with eksctl using the following command:
Note: Use the public and private subnets created as part of cluster 1 in this command, so that both clusters land in the same VPC. See this for more details.
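The second cluster is placed in the existing VPC by passing the first cluster's subnet IDs explicitly (the subnet IDs below are placeholders; substitute the ones eksctl created for cluster 1):

```shell
# Create the second EKS cluster in the SAME VPC by reusing the
# subnets from cluster 1 (placeholder IDs shown).
eksctl create cluster \
  --name cluster2 \
  --region us-east-1 \
  --nodes 2 \
  --node-type t3.medium \
  --appmesh-access \
  --vpc-public-subnets  subnet-0aaaa...,subnet-0bbbb... \
  --vpc-private-subnets subnet-0cccc...,subnet-0dddd...
```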
When completed, update the KUBECONFIG environment variable in each tab according to the eksctl output, respectively. Run the following in the respective tabs:
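eksctl writes a kubeconfig file per cluster; the exact paths come from the eksctl output, but they typically look like this:

```shell
# Tab 1: point kubectl at cluster1
export KUBECONFIG=~/.kube/eksctl/clusters/cluster1

# Tab 2: point kubectl at cluster2
export KUBECONFIG=~/.kube/eksctl/clusters/cluster2
```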
You have now set up the two clusters, with kubectl in each tab pointing at its respective cluster. Congratulations!
Deploy App Mesh custom components
In order to automatically inject App Mesh components and proxies on pod creation, we need to create some custom resources on the clusters. We will use Helm for that: we need to install tiller on both clusters, and then use Helm to run the following commands on both clusters.
Code base
Install Helm
Install tiller
Using Helm requires a server-side component called tiller to be installed on the cluster. Follow the instructions in the Helm documentation to install tiller on both clusters.
Verify tiller install
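A sketch of the Helm 2 setup, following the Helm 2 documentation (run against each cluster; the cluster-admin binding is the simplest option for a demo, not a production-grade RBAC setup):

```shell
# Install the helm CLI (macOS example)
brew install kubernetes-helm

# Create a service account for tiller and grant it access
kubectl -n kube-system create serviceaccount tiller
kubectl create clusterrolebinding tiller \
  --clusterrole cluster-admin \
  --serviceaccount=kube-system:tiller

# Install tiller into the cluster
helm init --service-account tiller

# Verify tiller is running
kubectl -n kube-system get pods -l name=tiller
```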
Install App Mesh Components
Run the following set of commands to install the App Mesh controller and Injector components.
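The controller and injector charts live in the eks-charts repository; with Helm 2 the installation looks roughly like this (chart names, namespace, and the mesh name "global" are taken from the eks-charts README of that era and may differ in your setup; verify against the current documentation):

```shell
# Add the EKS charts repository
helm repo add eks https://aws.github.io/eks-charts

# Install the App Mesh controller (CRDs plus reconciler)
kubectl create namespace appmesh-system
helm install --name appmesh-controller \
  --namespace appmesh-system eks/appmesh-controller

# Install the sidecar injector, creating a mesh named "global"
helm install --name appmesh-inject \
  --namespace appmesh-system eks/appmesh-inject \
  --set mesh.create=true \
  --set mesh.name=global
```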
We are now ready to deploy our front and colorapp applications to respective clusters along with the App Mesh, which will span both clusters.
Deploy services and mesh constructs
1. You should be in the walkthrough/howto-k8s-cross-cluster folder; all commands will be run from this location.
2. Your account ID:
3. Your Region, e.g., us-east-1:
4. The ENVOY_IMAGE environment variable is set to the App Mesh Envoy image, see Envoy.
5. The VPC_ID environment variable is set to the VPC where the Kubernetes pods are launched. The VPC will be used to set up a private DNS namespace in AWS using the create-private-dns-namespace API. To find the VPC of an EKS cluster, you can use aws eks describe-cluster. See below for the reason why an AWS Cloud Map PrivateDnsNamespace is required.
6. CLUSTER environment variables to export the kube configuration.
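Putting items 2 through 6 together, the environment might be prepared like this (cluster names are the ones used earlier; the Envoy image value is left as a placeholder, see the Envoy documentation for the current image URI):

```shell
export AWS_ACCOUNT_ID=$(aws sts get-caller-identity --query Account --output text)
export AWS_DEFAULT_REGION=us-east-1
export ENVOY_IMAGE=...   # App Mesh Envoy image, see the Envoy docs
export VPC_ID=$(aws eks describe-cluster --name cluster1 \
  --query 'cluster.resourcesVpcConfig.vpcId' --output text)
export CLUSTER1=cluster1
export CLUSTER2=cluster2
```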
Deploy
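With the environment set, the walkthrough's deploy script builds the images, pushes them to ECR, and applies the mesh and Kubernetes manifests to both clusters (script name as in the aws-app-mesh-examples repository):

```shell
./deploy.sh
```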
Verify deployment
On Cluster 1
On Cluster 2
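In each tab, listing the pods should show the injected sidecars (the appmesh-demo namespace name is assumed from the walkthrough):

```shell
# Cluster 1 tab: expect the front pod at 3/3
kubectl get pods -n appmesh-demo

# Cluster 2 tab: expect the colorapp-blue and colorapp-red pods at 3/3
kubectl get pods -n appmesh-demo
```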
Note that 3/3 on the application pods indicates that the sidecar containers have been injected.
Note also that the mesh components (the virtual service, the virtual router with its routes, and the virtual nodes) have been created as well. You may verify this in the App Mesh console.
Verify AWS Cloud Map
As part of the deploy command, we pushed the images to ECR, created a namespace in AWS Cloud Map, and created the mesh and the DNS entries by virtue of adding the service discovery attributes.
You may verify this with the following command:
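One way to check the registered instances is the AWS Cloud Map discover-instances API (namespace and service names follow from the virtual service name colorapp.appmesh-demo.pvt.aws.local used above):

```shell
aws servicediscovery discover-instances \
  --namespace-name appmesh-demo.pvt.aws.local \
  --service-name colorapp
```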
This should resolve to the backend service.
Test the application
The front service in cluster1 has been exposed as a load balancer and can be used directly.
You can also test it using a simple curler pod, like so:
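A throwaway pod with curl can exercise the front service from inside cluster 1; the image and the service DNS name below are assumptions for illustration, not mandated by the walkthrough:

```shell
# Start an interactive pod with curl available
kubectl run -it curler --image=tutum/curl /bin/bash

# Then, from inside the pod, call the front service:
curl front.appmesh-demo:8080/color
```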
Note: For this to work, you need to open port 8080 on the security group applied to the cluster 2 node group to cluster 1's security group. See the screenshots provided below.
Security groups
Inbound access
Great! You have successfully tested the service communication across clusters using the App Mesh and AWS Cloud Map.
Let's make a few requests and check that our X-Ray sidecar is indeed capturing traces.
Verify X-Ray console
Run the following command from a curler pod within Cluster1.
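For example, a small loop generates enough traffic for traces to show up in the console (same assumed endpoint as in the curler test above):

```shell
# From inside the curler pod: send 100 requests to the front service
for i in $(seq 1 100); do
  curl -s front.appmesh-demo:8080/color
  echo
done
```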
Let’s take a look at our X-Ray console:
Traces:
Service Map:
You will notice that the requests are intercepted by the Envoy proxy. The proxy is essentially a sidecar container, which is deployed alongside the application containers.
Summary
In this walkthrough, we created two EKS clusters within a VPC, created a frontend service in one cluster and backend services in the other cluster. We created an AWS App Mesh that spans both clusters and leveraged AWS Cloud Map to discover services so they could communicate. This can be expanded to multiple other clusters, not necessarily EKS, but a mix of ECS, EKS, Fargate, EC2, and so on.
Resources
AWS App Mesh Documentation
AWS CLI
AWS Cloud Map
Currently available AWS Regions for App Mesh
Envoy Image
Envoy documentation
EKS