Containers
Getting started with AWS App Mesh and Amazon EKS
NOTICE: October 04, 2024 – This post no longer reflects the best guidance for configuring a service mesh with Amazon EKS and its examples no longer work as shown. Please refer to newer content on Amazon VPC Lattice.
——–
In this blog post we explain service mesh usage in containerized microservices and walk you through a concrete example of how to get started with AWS App Mesh with Amazon EKS.
Increasingly, AWS customers adopt microservices to build scalable and resilient applications, reducing time-to-market. When moving from a monolithic to a microservices architecture, you break an app into a smaller set of microservices that are easier to develop and operate. You can use Amazon Elastic Kubernetes Service (EKS) and Amazon Elastic Container Service (ECS) to make it easier to run, upgrade, and monitor containerized microservices at scale.
Service meshes such as AWS App Mesh help you to connect services, monitor your application’s network, and control the traffic flow. When an application is running within a service mesh, the application services run alongside proxies which form the data plane of the mesh. The microservice process executes the business logic and the proxy is responsible for service discovery, observability, network encryption, automatic retries, and traffic shaping. App Mesh standardizes the way your services communicate, giving you consistent visibility and network traffic controls for all your containerized microservices. It has two core components: a fully managed control plane that configures the proxies and a data plane consisting of Envoy proxies, running as sidecar containers.
Using App Mesh with EKS
Amazon EKS is a managed service that makes it easy for you to run Kubernetes on AWS without needing to operate your own Kubernetes cluster. You can use App Mesh to implement a service mesh for applications running on EKS. The AWS App Mesh Controller For K8s, an open source project, makes using App Mesh with EKS straightforward by letting you manage App Mesh resources through the Kubernetes API. That is, you use kubectl to configure App Mesh, as we will show in the hands-on part below.
You can use App Mesh to connect services running in EKS with those running on ECS, EC2, and even in your datacenter using AWS Outposts. In this article, we’ll focus on working with App Mesh and EKS. First, let’s review some App Mesh features that can enhance microservices running in Kubernetes.
Network controls
App Mesh allows you to control the flow of traffic between services, which can help you experiment with new features. You can use this capability to divert a portion of traffic to a different version of your service. Kubernetes on its own doesn’t allow you to define how requests are split between multiple Deployments. With App Mesh, you can create rules to distribute traffic between different versions of a service using simple ratios.
App Mesh traffic controls can also make version rollouts significantly safer by enabling canary deployments. In this strategy, you create a new Kubernetes deployment with fewer pods alongside your old deployment and divert a small share of traffic to the new deployment. If the new version performs well, you gradually increase traffic to the new deployment until it ends up serving all the traffic.
You can also use App Mesh to improve application resiliency by implementing a connection timeout policy or configuring automatic retries in the proxy.
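As an illustration of such a resiliency policy, a route managed through the Kubernetes controller can carry a retry policy and a per-request timeout. The manifest below is a sketch only, not part of the walkthrough: all names, the port, and the values are placeholders, and route timeouts require a controller/CRD version that supports them.

apiVersion: appmesh.k8s.aws/v1beta2
kind: VirtualRouter
metadata:
  name: my-service              # placeholder name
  namespace: my-namespace       # placeholder namespace
spec:
  listeners:
    - portMapping:
        port: 8080              # illustrative port
        protocol: http
  routes:
    - name: default
      httpRoute:
        match:
          prefix: /
        action:
          weightedTargets:
            - virtualNodeRef:
                name: my-service-v1   # placeholder Virtual Node
              weight: 100
        retryPolicy:
          maxRetries: 2               # retry a failed request up to twice
          perRetryTimeout:
            unit: ms
            value: 2000
          httpRetryEvents:
            - server-error            # retry on 5xx responses
            - gateway-error           # retry on 502/503/504
        timeout:
          perRequest:
            unit: s
            value: 15                 # fail requests that take longer than 15s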
Observability
Observability is a property of a system that determines how well its state can be inferred from knowledge of its external outputs. In the microservices context, these external outputs are service metrics, traces, and logs. Metrics show the behavior of a system over time. Logs make troubleshooting easier by pointing to the cause of potential errors. Distributed traces help us debug and identify problematic components in the application by providing details for a specific point in time, and they help us understand the application workflow within and among microservices.
You can measure the health of your application by configuring App Mesh to generate metrics (such as total requests), access logs, and traces. As service traffic passes through Envoy, Envoy inspects it, generates statistics, creates access logs, and adds HTTP headers to outbound requests, which can be used to generate traces. Metrics and traces can be forwarded to aggregation services like Prometheus and the AWS X-Ray daemon, which can then be consumed to analyze the system’s behavior. Since App Mesh uses Envoy, it is also compatible with a wide range of AWS partner and open source tools for monitoring microservices.
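For example, Envoy access logging can be turned on per Virtual Node through the controller’s CRDs. The snippet below is a sketch only (the node name, namespace, label, and port are placeholders); it writes access logs to stdout so they can be picked up by your usual log collector:

apiVersion: appmesh.k8s.aws/v1beta2
kind: VirtualNode
metadata:
  name: my-service              # placeholder name
  namespace: my-namespace       # placeholder namespace
spec:
  podSelector:
    matchLabels:
      app: my-service           # placeholder label
  listeners:
    - portMapping:
        port: 8080              # illustrative port
        protocol: http
  serviceDiscovery:
    dns:
      hostname: my-service.my-namespace.svc.cluster.local
  logging:
    accessLog:
      file:
        path: /dev/stdout       # send Envoy access logs to stdout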
Encryption in transit
Microservices communicate with each other over the network, which means sensitive data may travel between hosts. Many customers want to encrypt traffic between services. App Mesh can help you with that: it can encrypt traffic between services using a TLS certificate, and you don’t need to handle TLS negotiation and termination in your application code.
You can use your own certificates to encrypt the traffic, or you can use the AWS Certificate Manager. If you choose the latter, ACM will automatically renew certificates that are nearing the end of their validity, and App Mesh will automatically distribute the renewed certificates.
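As a sketch of how this looks in practice, the controller lets you attach a TLS configuration to a Virtual Node listener. The certificate ARN below is a hypothetical placeholder, and the node name, namespace, and port are illustrative:

apiVersion: appmesh.k8s.aws/v1beta2
kind: VirtualNode
metadata:
  name: my-service                  # placeholder name
  namespace: my-namespace           # placeholder namespace
spec:
  podSelector:
    matchLabels:
      app: my-service               # placeholder label
  listeners:
    - portMapping:
        port: 8080                  # illustrative port
        protocol: http
      tls:
        mode: STRICT                # require TLS on this listener
        certificate:
          acm:
            certificateArn: arn:aws:acm:us-west-2:111122223333:certificate/EXAMPLE   # hypothetical ARN
  serviceDiscovery:
    dns:
      hostname: my-service.my-namespace.svc.cluster.local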
App Mesh Concepts
To use App Mesh, you will need to create a Mesh. A mesh acts as a logical boundary in which all the microservices will reside. You can think of it as a “neighborhood” that comprises your microservices:
The next component is the Virtual Service. Virtual Services act as virtual pointers to your applications and are the service names your applications use to reach the endpoints defined in your mesh. In a microservices architecture, each microservice will be a Virtual Service and will have a virtualServiceName. Note that an App Mesh Virtual Service is not the same as a Kubernetes Service.
A Virtual Service represents an application, but an application can also have multiple versions. For example, an application can have two different versions: an internal one and a public-facing one. Each version is represented by a Virtual Node. As shown in the image above, a Virtual Service can have just one Virtual Node, or it can have multiple Virtual Nodes if the application has multiple versions. If a Virtual Service has multiple Virtual Nodes, you define how traffic is routed between them using a Virtual Router.
Virtual Routers handle traffic routing based on specific rules; these rules are called Virtual Routes. A Virtual Router needs to have at least one Virtual Route. The routing logic can be based on different criteria such as HTTP headers, URL paths, or gRPC service and method names. You can also use Virtual Routes to implement retry logic and error handling.
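To tie these concepts together, here is a rough sketch of how a Virtual Service points at its provider when expressed as a Kubernetes resource (the names and namespace are placeholders, not part of the walkthrough):

apiVersion: appmesh.k8s.aws/v1beta2
kind: VirtualService
metadata:
  name: my-service                 # placeholder name
  namespace: my-namespace          # placeholder namespace
spec:
  awsName: my-service.my-namespace.svc.cluster.local
  provider:
    virtualRouter:                 # route via a Virtual Router...
      virtualRouterRef:
        name: my-service           # ...defined elsewhere in the same namespace

If the application has only one version, the provider can reference a Virtual Node directly instead of a Virtual Router.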
App Mesh With EKS In Action
In this tutorial, you will create AWS App Mesh components and deploy them using a sample application called Yelb. After placing the Yelb app into a service mesh, you will create a new version of the Yelb application server and use App Mesh Virtual Routes to shift traffic between the two versions of the app.
Yelb allows users to vote on a set of alternatives like restaurants and dynamically updates pie charts based on the votes. Additionally, Yelb keeps track of the number of page views and prints the hostname of the yelb-appserver instance serving the API request upon a vote or a page refresh. Yelb components include:
- A frontend called yelb-ui is responsible for vending the JS code to the browser.
- An application server named yelb-appserver, a Sinatra application that reads and writes to a cache server (redis-server) and a Postgres backend database (yelb-db).
- Redis stores the number of page views and Postgres stores the votes.
Yelb’s architecture looks like this:
NOTE: Yelb’s configuration uses ephemeral storage for all the containers. Running databases in this way is only done for demonstration purposes.
To follow along, you will need an environment with some tooling. We used an AWS Cloud9 instance to run this tutorial; if you want to create a Cloud9 instance in your account, follow the steps in the EKS Workshop from the chapter Create a Workspace through Update IAM Settings for your Workspace.
1. Set Up The Infrastructure
To run this tutorial, you need to install some specific tools.
Start by cloning the GitHub repository:
git clone https://github.com/aws/aws-app-mesh-examples.git
cd aws-app-mesh-examples/walkthroughs/eks-getting-started/
If you are using a Cloud9 instance, run the following to install the required tools:
./cloud9-startup.sh && source ~/.bash_profile > /dev/null
You will use a CloudFormation template to create a VPC, including a security group and two ECR repositories. The baseline.sh script deploys the CloudFormation stack and creates the base infrastructure: a VPC with public and private subnets, plus an IAM policy. To kick things off, execute it:
./baseline.sh
Note that the above script takes around five minutes to complete.
2. Create The EKS Cluster
To create the EKS cluster, run the following command, which will take around 15 minutes to finish:
./infrastructure/create_eks.sh
Once completed, you can test the cluster connectivity like so:
$ kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 172.20.0.1 <none> 443/TCP 14m
3. Deploy A Demo App
To deploy our demo app, Yelb, execute the following:
kubectl apply -f infrastructure/yelb_initial_deployment.yaml
To get the URL of the load balancer for carrying out testing in your browser, use the following command:
$ kubectl get service yelb-ui -n yelb
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
yelb-ui LoadBalancer 172.20.19.116 a96f5309de3csdc19847045bfbd34bd4-XXXXXXXXXX.us-west-2.elb.amazonaws.com 80:30467/TCP 3m33s
Note that the URL of the public load balancer is available via the EXTERNAL-IP field. You may have to wait a few minutes for DNS propagation. When you open said URL in your browser of choice, the result should look as follows:
4. Meshify The Demo App
To start creating the App Mesh resources and add the Yelb app into a mesh, the first thing you need to do is install the AWS App Mesh Controller. This controller allows you to configure App Mesh resources using kubectl. If you’d like, you can also use the App Mesh console for configuration; in this tutorial we will use kubectl. Once completed, the resulting setup looks as follows:
You will be using Helm to install the App Mesh Controller. Helm is an open source project that makes it easier to define, install, and upgrade applications in a Kubernetes cluster. So you need to add the Amazon EKS Helm chart repository to Helm:
helm repo add eks https://aws.github.io/eks-charts
Next, create a namespace for the App Mesh controller, which looks after the custom resources:
kubectl create ns appmesh-system
And now it’s time for you to install the App Mesh controller with:
helm upgrade -i appmesh-controller eks/appmesh-controller \
--namespace appmesh-system
Confirm that the App Mesh controller is running by listing the pods in the appmesh-system namespace:
$ kubectl get pods -n appmesh-system
NAME READY STATUS RESTARTS AGE
appmesh-controller-66b749c78b-67n68 1/1 Running 0 6s
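As an additional sanity check, you can verify that the controller’s custom resource definitions are present (assuming the chart version you installed ships the appmesh.k8s.aws CRDs):

kubectl get crds | grep appmesh.k8s.aws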
We deploy the Yelb application in the yelb namespace and use the same name for the mesh. You need to add two labels to the yelb namespace: mesh and appmesh.k8s.aws/sidecarInjectorWebhook. These labels instruct the controller to inject and configure the Envoy proxies in the pods:
kubectl label namespace yelb mesh=yelb
kubectl label namespace yelb appmesh.k8s.aws/sidecarInjectorWebhook=enabled
Great! Now we’re in a position to create the mesh, using:
# Create the manifest with the mesh config:
$ cat <<"EOF" > /tmp/eks-scripts/yelb-mesh.yml
apiVersion: appmesh.k8s.aws/v1beta2
kind: Mesh
metadata:
  name: yelb
spec:
  namespaceSelector:
    matchLabels:
      mesh: yelb
EOF
# Create mesh with above configuration:
$ kubectl apply -f /tmp/eks-scripts/yelb-mesh.yml
NOTE: The namespaceSelector parameter matches Kubernetes namespaces with the label mesh and the value yelb.
If you want to, you can also use the AWS console to validate that the mesh was created properly:
After creating the service mesh, you have to create all the App Mesh components for every Yelb component. You’ll be using the YAML files in the infrastructure/appmesh_templates directory to create the Virtual Nodes, Virtual Routers, Routes, and Virtual Services. Apply these configurations using the following commands:
kubectl apply -f infrastructure/appmesh_templates/appmesh-yelb-redis.yaml
kubectl apply -f infrastructure/appmesh_templates/appmesh-yelb-db.yaml
kubectl apply -f infrastructure/appmesh_templates/appmesh-yelb-appserver.yaml
kubectl apply -f infrastructure/appmesh_templates/appmesh-yelb-ui.yaml
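The exact manifests live in the repository; as a rough idea of their shape, a Virtual Node for the appserver looks approximately like the sketch below (field values here are illustrative and may differ from the files you just applied). The backends list declares which Virtual Services the appserver is allowed to call:

apiVersion: appmesh.k8s.aws/v1beta2
kind: VirtualNode
metadata:
  name: yelb-appserver
  namespace: yelb
spec:
  podSelector:
    matchLabels:
      app: yelb-appserver
  listeners:
    - portMapping:
        port: 4567                 # illustrative port
        protocol: http
  serviceDiscovery:
    dns:
      hostname: yelb-appserver.yelb.svc.cluster.local
  backends:
    - virtualService:
        virtualServiceRef:
          name: yelb-db            # appserver talks to the database...
    - virtualService:
        virtualServiceRef:
          name: redis-server       # ...and to the Redis cache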
The App Mesh controller is configured to inject Envoy sidecar containers, but it hasn’t done so yet. This is because sidecars are only injected when a Pod is created. So, delete the existing pods using kubectl -n yelb delete pods --all. This will trigger the creation of new pods with the Envoy sidecars. To validate that the controller has done its job, check the number of containers running in each pod:
$ kubectl -n yelb get pods
NAME READY STATUS RESTARTS AGE
redis-server-7dc845588-qbxv2 2/2 Running 0 48s
yelb-appserver-7d644fbf76-d642g 2/2 Running 0 48s
yelb-db-76bdb465fc-857fm 2/2 Running 0 48s
yelb-ui-859595cdb8-j2cqj 2/2 Running 0 48s
Notice that every pod in this namespace now has two containers. Let’s take a closer look at one of them:
# Get a single pod:
$ YELB_POD=$(kubectl -n yelb get pod \
    --no-headers=true \
    --output name |
  awk -F "/" '{print $2}' |
  head -n 1)
# Describe the pod:
$ kubectl -n yelb describe pod $YELB_POD
...
  envoy:
    Container ID:   docker://1dbf736bfb6e89a1f4c9a567dbbfeeca26a0d846eda14f667ba128d4b5b36233
    Image:          840364872350.dkr.ecr.us-west-2.amazonaws.com/aws-appmesh-envoy:v1.12.3.0-prod
    Image ID:       docker-pullable://840364872350.dkr.ecr.us-west-2.amazonaws.com/aws-appmesh-envoy@sha256:f7ba6f019430c43f4fbadf3035e0a7c1555362a56a79d2d84280b2967595eeaf
    Port:           9901/TCP
    Host Port:      0/TCP
    State:          Running
      Started:      Wed, 13 May 2020 18:19:12 +0000
    Ready:          True
    Restart Count:  0
    Requests:
      cpu:     10m
      memory:  32Mi
    Environment:
      APPMESH_VIRTUAL_NODE_NAME:  mesh/yelb/virtualNode/redis-server-virtual-node
      APPMESH_PREVIEW:            0
      ENVOY_LOG_LEVEL:            info
      AWS_REGION:                 us-west-2
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-tj7xx (ro)
...
You can now go back to Yelb’s web interface and make sure that you can access it. Any recorded votes are now lost since we deleted the pods in the previous step.
5. Traffic Shaping With A New App Version
Now that Yelb is meshified, go ahead and create a new version of the yelb-appserver. We will then use App Mesh to send traffic to this new version of the application. To do so, create a new container image with the updated code and push it to an ECR repository using the following command:
./build-appserver-v2.sh
Next, create a new Virtual Node that will represent this new app version:
kubectl apply -f infrastructure/appmesh_templates/appmesh-yelb-appserver-v2.yaml
Also, we need to create a new deployment using the manifest generated by build-appserver-v2.sh:
kubectl apply -f infrastructure/yelb_appserver_v2_deployment.yaml
You should be able to see the new version of the yelb-appserver running by listing the pods in the yelb namespace:
$ kubectl get pods -n yelb
NAME READY STATUS RESTARTS AGE
redis-server-7dc845588-vfdld 2/2 Running 0 107m
yelb-appserver-7d644fbf76-2gmmc 2/2 Running 0 107m
yelb-appserver-v2-658d6647d6-9t8x5 2/2 Running 0 21s
yelb-db-76bdb465fc-fltjj 2/2 Running 0 107m
yelb-ui-68455b649b-677fb 2/2 Running 0 107m
Now you configure the App Mesh Virtual Route to send 50% of the traffic to version v2 and 50% to the current one. Note that this is for demonstration purposes; for production use it is advisable to roll new versions out more gradually.
The architecture diagram below shows the environment with two versions of the yelb-appserver running at the same time:
To change the Virtual Route, run the following command:
kubectl apply -f ./infrastructure/appmesh_templates/appmesh-virtual-router-appserver-v1-v2.yaml
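The manifest applied above defines the weighted split. Roughly, the relevant part of the route looks like the sketch below; the names mirror the walkthrough, but the port and other field values are illustrative rather than copied from the repository:

apiVersion: appmesh.k8s.aws/v1beta2
kind: VirtualRouter
metadata:
  name: yelb-appserver
  namespace: yelb
spec:
  listeners:
    - portMapping:
        port: 4567                    # illustrative port
        protocol: http
  routes:
    - name: route-to-appserver
      httpRoute:
        match:
          prefix: /
        action:
          weightedTargets:
            - virtualNodeRef:
                name: yelb-appserver      # current version gets 50% of requests
              weight: 50
            - virtualNodeRef:
                name: yelb-appserver-v2   # new version gets the other 50%
              weight: 50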
After modifying the Virtual Route, you can reload the Yelb page a couple of times and see that some of your requests are served by the old version of the yelb-appserver while others are served by the new version. You can tell the versions apart by looking at the App Server field: the old version shows the hostname of the yelb-appserver container, while the new version shows ApplicationVersion2:
Finally, let’s change the Virtual Route to send all traffic to the newest version of yelb-appserver:
kubectl apply -f infrastructure/appmesh_templates/appmesh-virtual-router-appserver-v2.yaml
You can see that after changing the Virtual Route again, the yelb-appserver-v2 deployment handles all requests:
6. Cleaning Up
To clean up all the resources created during this tutorial, run the cleanup script with the following command:
./infrastructure/cleanup.sh
Note that if you followed these steps using a Cloud9 instance, refer to the cleanup steps for the Cloud9 instance as described in the EKS Workshop.
Next Steps & Conclusion
You will find Weave Flagger helpful if you are interested in automating canary deployments. Flagger allows you to promote canary deployments using AWS App Mesh automatically. It uses Prometheus metrics to determine canary deployment success or failure and uses App Mesh’s routing controls to shift traffic between the current and canary deployment automatically.
Further, some useful links if you want to dive deeper into the topic:
- Check out the aws-app-mesh-examples repo on GitHub.
- The App Mesh Developer Guide in the docs contains more tips and tricks.
In this post we went through the fundamentals of App Mesh and showed how to place an existing Kubernetes application into a mesh using the open source App Mesh Controller for K8s. You also learned how you can try different deployment techniques by using Virtual Routes to split traffic between two versions of an application. In the next blog, we will show you how you can use App Mesh Virtual Gateways to provide connectivity inside and outside the mesh.
You can track upcoming features via the App Mesh roadmap and experiment with new features using the App Mesh preview channel. Last but not least: do check out appmeshworkshop.com to learn more about App Mesh in a hands-on fashion, and join us on the App Mesh Slack community to share experiences and discuss with the team and your peers.