Securing EKS Ingress With Contour And Let’s Encrypt The GitOps Way
This is a guest post by Stefan Prodan of Weaveworks.
In Kubernetes terminology, Ingress exposes HTTP(S) routes from outside the cluster to services running within the cluster. An Ingress can be configured to provide Kubernetes services with externally-reachable URLs while performing load balancing and SSL/TLS termination.
Kubernetes comes with an Ingress resource, and there are several controllers that implement the Ingress specification, such as the ALB ingress controller or NGINX. The Kubernetes Ingress specification is quite limited, so most controllers rely on annotations to extend the routing features beyond the basics of what Ingress allows. But even with annotations, some limitations are hard to overcome, such as cross-namespace routing or weighted load balancing.
Contour (soon to become a CNCF project) is a modern ingress controller based on Envoy that expands upon the functionality of the Ingress API with a new specification named HTTPProxy. The HTTPProxy API allows for a richer user experience and addresses the limitations of Ingress use in multi-tenant environments.
The HTTPProxy specification is flexible enough to facilitate advanced L7 routing policies based on HTTP header or cookie filters as well as weighted load balancing between Kubernetes services. These features make Contour suitable for automating Canary releases and A/B testing with Flagger.
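As a taste of what the HTTPProxy API looks like, here is a minimal, hypothetical example of weighted load balancing between two services; the app-v1/app-v2 names and the 90/10 split are made up for illustration and are not part of this guide:
apiVersion: projectcontour.io/v1
kind: HTTPProxy
metadata:
  name: app-split
  namespace: demo
spec:
  virtualhost:
    fqdn: app.example.com
  routes:
    - services:
        # 90% of requests go to the stable version, 10% to the new one
        - name: app-v1
          port: 9898
          weight: 90
        - name: app-v2
          port: 9898
          weight: 10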
This guide shows you how to set up a GitOps pipeline to securely expose Kubernetes services over HTTPS using:
- Amazon EKS and Amazon Route 53
- cert-manager to provision TLS certificates from Let’s Encrypt
- Contour as the ingress controller
- Flux as the GitOps operator
- podinfo as the demo web application
Create an EKS cluster
You’ll need an AWS account, a GitHub account, git, kubectl, and eksctl installed locally. First, create an EKS cluster with four EC2 nodes:
cat << EOF | eksctl create cluster -f -
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: my-cluster
  region: eu-west-1
nodeGroups:
  - name: controllers
    labels: { role: controllers }
    instanceType: m5.large
    desiredCapacity: 2
    iam:
      withAddonPolicies:
        certManager: true
        albIngress: true
    taints:
      controllers: "true:NoSchedule"
managedNodeGroups:
  - name: workers
    labels: { role: workers }
    instanceType: m5.large
    desiredCapacity: 2
    volumeSize: 120
EOF
The above command creates an EKS cluster with two node groups:
- The controllers node group has the IAM roles needed by cert-manager to solve DNS01 ACME challenges and will be used to run the Envoy proxy DaemonSet along with Contour and cert-manager.
- The workers managed node group is for the apps that will be exposed outside the cluster by Envoy.
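Once eksctl finishes, you can optionally confirm that all four nodes joined the cluster and carry the expected role labels:
# list the nodes along with their role label (should show 2 controllers and 2 workers)
kubectl get nodes -L role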
A Kustomize patch is used to pin the workloads to node groups with selectors and tolerations, for example:
# contour/node-selector-patch.yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: envoy
  namespace: projectcontour
spec:
  template:
    spec:
      nodeSelector:
        role: controllers
      tolerations:
        - key: controllers
          operator: Exists
We use Kustomize patches to avoid modifying the original manifests. You can update the manifests by running ./scripts/update-manifests.sh; this script downloads the latest cert-manager and Contour YAML and overrides the manifests in the repo.
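For orientation, a minimal kustomization.yaml could look roughly like this; the file names are assumptions for illustration and may not match the repository layout exactly:
# contour/kustomization.yaml (illustrative layout)
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: projectcontour
resources:
  - contour.yaml              # upstream Contour/Envoy manifests
patchesStrategicMerge:
  - node-selector-patch.yaml  # pins Envoy to the controllers node group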
Install Flux
Flux is a GitOps operator for Kubernetes that keeps your cluster state in sync with a Git repository. Because Flux is pull based and also runs inside Kubernetes, you don’t have to expose the cluster credentials outside your production environment.
You can define the desired state of your cluster with Kubernetes YAML manifests and customise them with Kustomize. Flux implements a control loop that continuously applies the desired state to your cluster, offering protection against harmful actions like deleting deployments or altering policies.
Install fluxctl depending on your platform:
# macOS
brew install fluxctl
# Windows
choco install fluxctl
# Linux
curl -sL https://fluxcd.io/install | sh
On GitHub, fork this repository and clone it locally (replace stefanprodan with your GitHub username):
git clone https://github.com/stefanprodan/eks-contour-ingress
cd eks-contour-ingress
Next, create the fluxcd namespace using:
kubectl create ns fluxcd
And now, install Flux by specifying your fork URL (and again, replace stefanprodan with your GitHub username):
export GHUSER="stefanprodan" && \
fluxctl install \
--git-user=${GHUSER} \
--git-email=${GHUSER}@users.noreply.github.com \
--git-url=git@github.com:${GHUSER}/eks-contour-ingress \
--git-branch=master \
--manifest-generation=true \
--namespace=fluxcd | kubectl apply -f -
Set up Git sync
At startup, Flux generates an SSH key and logs the public key. Find the public key with:
fluxctl identity --k8s-fwd-ns fluxcd
In order to sync your cluster state with Git, you need to copy the public key and create a deploy key with write access on your GitHub repository.
Open GitHub, navigate to your repository, go to Settings > Deploy keys, click on Add deploy key, check Allow write access, paste the Flux public key, and click Add key.
After a couple of seconds, Flux will deploy Contour, cert-manager, and podinfo in your cluster. You can check the sync status with watch kubectl get pods --all-namespaces.
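If you prefer a Flux-centric view, you can also list the workloads Flux manages:
fluxctl list-workloads --k8s-fwd-ns fluxcd --all-namespaces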
Configure DNS
Retrieve the external address of Contour’s Envoy load balancer:
$ kubectl get -n projectcontour service envoy -o wide
NAME    TYPE           CLUSTER-IP      EXTERNAL-IP
envoy   LoadBalancer   10.100.228.53   af4726981288e11eaade7062a36c250a-1448602599.eu-west-1.elb.amazonaws.com
Using the external address, create a CNAME record in Route 53, e.g. *.example.com, that maps to the LB address.
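If you prefer to script this step, the same record can be created with the AWS CLI; the hosted zone ID below is a placeholder and the load balancer address is the one retrieved above:
# hypothetical hosted zone ID, replace with your own zone and domain
export HOSTED_ZONE_ID="Z0123456789EXAMPLE"
export LB_ADDRESS="af4726981288e11eaade7062a36c250a-1448602599.eu-west-1.elb.amazonaws.com"

aws route53 change-resource-record-sets \
  --hosted-zone-id ${HOSTED_ZONE_ID} \
  --change-batch '{
    "Changes": [{
      "Action": "UPSERT",
      "ResourceRecordSet": {
        "Name": "*.example.com",
        "Type": "CNAME",
        "TTL": 300,
        "ResourceRecords": [{"Value": "'"${LB_ADDRESS}"'"}]
      }
    }]
  }'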
Verify your DNS setup using the host command:
$ host podinfo.example.com
podinfo.example.com is an alias for af4726981288e11eaade7062a36c250a-1448602599.eu-west-1.elb.amazonaws.com.
Obtain Let’s Encrypt wildcard certificate
In order to obtain certificates from Let’s Encrypt, you need to prove ownership of your domain by creating a TXT record with specific content, which shows that you control the domain’s DNS records. The DNS challenge and cert renewal can be fully automated with cert-manager and Route 53.
Next, create a cluster issuer with the Let’s Encrypt DNS01 solver (replace stefanprodan with your GitHub username):
export GHUSER="stefanprodan" && \
cat << EOF | tee ingress/issuer.yaml
apiVersion: cert-manager.io/v1alpha2
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod
  annotations:
    fluxcd.io/ignore: "false"
spec:
  acme:
    email: ${GHUSER}@users.noreply.github.com
    privateKeySecretRef:
      name: letsencrypt-prod
    server: https://acme-v02.api.letsencrypt.org/directory
    solvers:
      - dns01:
          route53:
            region: eu-west-1
EOF
Create a certificate in the projectcontour namespace (replace example.com with your domain):
export DOMAIN="example.com" && \
cat << EOF | tee ingress/cert.yaml
apiVersion: cert-manager.io/v1alpha2
kind: Certificate
metadata:
  name: cert
  namespace: projectcontour
  annotations:
    fluxcd.io/ignore: "false"
spec:
  secretName: cert
  commonName: "*.${DOMAIN}"
  dnsNames:
    - "*.${DOMAIN}"
  issuerRef:
    name: letsencrypt-prod
    kind: ClusterIssuer
EOF
And now it’s time to apply the changes via git, the GitOps way:
git add -A && \
git commit -m "add wildcard cert" && \
git push origin master && \
fluxctl sync --k8s-fwd-ns fluxcd
Flux does a git<->cluster reconciliation every five minutes, hence we use the fluxctl sync command above to speed up the synchronization.
Wait for the certificate to be issued (it takes up to two minutes to complete):
$ watch kubectl -n projectcontour describe certificate
Events:
  Type    Reason        Age    From          Message
  ----    ------        ----   ----          -------
  Normal  GeneratedKey  2m17s  cert-manager  Generated a new private key
  Normal  Requested     2m17s  cert-manager  Created new CertificateRequest resource "cert-1178588226"
  Normal  Issued        20s    cert-manager  Certificate issued successfully
Once the certificate has been issued, cert-manager will create a secret with the TLS cert; you can check it like so:
$ kubectl -n projectcontour get secrets
NAME   TYPE                DATA   AGE
cert   kubernetes.io/tls   3      5m40s
We store the certificate in the Contour namespace so that we can reuse it for multiple apps deployed in different namespaces.
Expose services over TLS
In order to expose the demo app podinfo and make it accessible outside your cluster, you’ll be using Contour’s HTTPProxy custom resource definition.
For this, create an HTTPProxy that references the TLS cert secret (replace example.com with your domain):
export DOMAIN="example.com" && \
cat << EOF | tee ingress/proxy.yaml
apiVersion: projectcontour.io/v1
kind: HTTPProxy
metadata:
  name: podinfo
  namespace: projectcontour
  annotations:
    fluxcd.io/ignore: "false"
spec:
  virtualhost:
    fqdn: podinfo.${DOMAIN}
    tls:
      secretName: cert
  includes:
    - name: podinfo
      namespace: demo
EOF
HTTPProxies can include other HTTPProxy objects; the above configuration includes the podinfo HTTPProxy from the demo namespace, see:
$ cat podinfo/proxy.yaml
apiVersion: projectcontour.io/v1
kind: HTTPProxy
metadata:
  name: podinfo
  namespace: demo
spec:
  routes:
    - services:
        - name: podinfo
          port: 9898
We use cross-namespace inclusion so that the shared TLS wildcard certificate can be used by apps deployed in different namespaces.
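To illustrate how the shared certificate can be reused, here is a hypothetical second root HTTPProxy for another app; the app2 name and namespace are invented for the example and are not part of this guide’s repository:
apiVersion: projectcontour.io/v1
kind: HTTPProxy
metadata:
  name: app2
  namespace: projectcontour
spec:
  virtualhost:
    fqdn: app2.example.com
    tls:
      secretName: cert        # the same wildcard certificate secret
  includes:
    - name: app2              # child HTTPProxy defined alongside the app
      namespace: app2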
So let’s apply the changes via git, for Flux to pick them up and sync the cluster:
git add -A && \
git commit -m "expose podinfo" && \
git push origin master && \
fluxctl sync --k8s-fwd-ns fluxcd
When TLS is enabled for a virtual host, Contour will redirect the traffic to the secure interface. To verify this, run (with your domain):
$ curl -vL podinfo.example.com
< HTTP/1.1 301 Moved Permanently
< location: https://podinfo.example.com/
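To confirm that Envoy is serving the Let’s Encrypt wildcard certificate, you can also inspect it with openssl (using your own domain):
# print the issuer, subject, and validity of the certificate presented on port 443
openssl s_client -connect podinfo.example.com:443 -servername podinfo.example.com </dev/null 2>/dev/null \
  | openssl x509 -noout -issuer -subject -dates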
Yay, we did it! Finally, let’s move on to a slightly more advanced topic: how to route traffic to new features using the progressive delivery paradigm.
Progressive Delivery
Progressive delivery is an umbrella term for advanced deployment patterns, including canary deployments, feature flags and A/B testing. We use these techniques to reduce the risk of introducing a new software version in production by giving app developers and SRE teams fine-grained control over the blast radius.
You can use Flagger together with Contour’s HTTPProxy to automate canary releases and A/B testing for your web apps. When using Flagger, you would replace the podinfo service and proxy definitions with a canary definition. Flagger generates the Kubernetes ClusterIP services and the Contour HTTPProxy on its own, based on the canary spec.
When you deploy a new version of an app, Flagger gradually shifts traffic to the canary and, at the same time, measures the request success rate as well as the average response duration; these metrics are provided by Envoy and collected by Prometheus. You can extend the canary analysis with custom Prometheus metrics, acceptance tests, and load tests to harden the validation of your app releases.
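As a rough sketch of what such a canary definition could look like for podinfo, here is an illustrative example; the analysis values are placeholders and the exact schema should be checked against the Flagger documentation:
apiVersion: flagger.app/v1beta1
kind: Canary
metadata:
  name: podinfo
  namespace: demo
spec:
  provider: contour
  targetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: podinfo
  service:
    port: 9898
  analysis:
    # run a check every 30s, abort after 5 failed checks,
    # shift traffic in 5% steps up to 50%
    interval: 30s
    threshold: 5
    maxWeight: 50
    stepWeight: 5
    metrics:
      - name: request-success-rate
        thresholdRange:
          min: 99
        interval: 1m
      - name: request-duration
        thresholdRange:
          max: 500
        interval: 1m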
If you want to give Flagger a try, there is a Contour progressive delivery tutorial available for you to use.