AWS Open Source Blog

Consistent OIDC authentication across multiple EKS clusters using Kube-OIDC-Proxy

Amazon Elastic Kubernetes Service (Amazon EKS) authenticates users against IAM before they’re granted access to an EKS cluster. Access to each cluster is controlled by the aws-auth ConfigMap, which maps IAM users and roles to Kubernetes RBAC users and groups. In this guest post, Josh Van Leeuwen of Jetstack looks at how we can use several open source projects to authenticate users against an identity provider like GitHub across multiple clusters.

Kube-OIDC-Proxy is an open source reverse proxy developed by Jetstack to enable OIDC authentication to various backends. In the case of EKS, it can be used to provide OIDC authentication to multiple EKS clusters using the same user identity issued by a third party provider. This post explores how Kube-OIDC-Proxy works, how to deploy it into multiple EKS clusters, and how to leverage other open source tooling to provide a seamless authentication experience for end users.

How does it work?

OIDC, or OpenID Connect, is a protocol that extends the existing OAuth 2.0 protocol. The OIDC flow starts with a user requesting a JSON Web Token (JWT) from an identity provider. The token contains an appropriately scoped set of claims about the user, such as an email address or name; a header with extra information about the token itself, e.g. the signature algorithm; and finally a signature produced by the identity provider. The resource server uses this signature to verify the token contents against the signing keys published by the identity provider.
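For example, the decoded payload of an ID token issued by Dex in this setup would carry claims along the following lines (all values here are illustrative):

{
  "iss": "https://dex.eks.aws.joshvanl.com",
  "sub": "xxxxxx",
  "aud": "gangway",
  "exp": 1572998400,
  "email": "xxxxxx@gmail.com",
  "email_verified": true,
  "groups": ["dev-team"],
  "name": "xxxxxx"
}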

Kube-OIDC-Proxy is a reverse proxy based on Kubernetes internals that authenticates requests using OIDC. Authenticated requests are then forwarded to a backend, such as a Kubernetes API server, with impersonation headers appended that carry the identity verified from the incoming OIDC token. This pairs well with Kubernetes API servers whose OIDC authentication flags cannot be configured, as is the case with EKS: the proxy performs the OIDC authentication itself and impersonates the resulting identity, subject to RBAC, when forwarding requests to the API server.
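As a sketch of what this looks like on the wire, a request forwarded by the proxy to the API server carries the proxy's own credentials plus the standard Kubernetes impersonation headers derived from the verified token (values illustrative):

GET /api/v1/namespaces/default/pods HTTP/1.1
Host: <EKS API server endpoint>
Authorization: Bearer <kube-oidc-proxy service account token>
Impersonate-User: xxxxxx@gmail.com
Impersonate-Group: dev-team

For this to work, the proxy's own service account must be granted RBAC permission to use the impersonate verb on users and groups.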

Deployment

To create a fully functioning multi-cluster OIDC authentication infrastructure, we can make use of other open source projects to integrate with external OIDC providers, as well as provide a friendly experience to end users. Namely, we will be using the following projects:

  • Cert-Manager: A certificate management tool used to request and automate the renewal of TLS certificates in Kubernetes, including certificates from Let’s Encrypt.
  • Dex: An OIDC provider that provides connectors for external OAuth providers to obtain an identity; in this case, a GitHub application will be used. A single instance of Dex will be deployed into the master cluster to service all other components in all clusters, including signing the OIDC tokens.
  • Gangway: A website to facilitate the OAuth flow for obtaining the necessary user identity to form OIDC tokens from providers and automatically generate Kubeconfigs based on these tokens for end users.
  • Contour: An Envoy Proxy-backed ingress controller that routes traffic to the different components using a single publicly exposed load balancer. TLS passthrough is enabled, meaning TLS connections terminate at each app rather than at the edge, preventing unencrypted traffic from being passed around the cluster.
  • NGINX: Used as an HTTP web server to provide a landing page to help end users easily navigate to different Gangways across clusters. A single instance will be deployed to the master cluster to service all clusters.
  • Kube-OIDC-Proxy: The reverse proxy that users will authenticate through to access the API server in each cluster.

Deploying Kube-OIDC-Proxy

First, clone the kube-oidc-proxy repository from GitHub:

git clone https://github.com/jetstack/kube-oidc-proxy.git

Create one or more EKS clusters for Kube-OIDC-Proxy to authenticate against. Working from the Kube-OIDC-Proxy repository, copy the EKS Terraform module once for each cluster you want to create, giving each copy whatever name you like.

Note: This step requires HashiCorp’s Terraform. You can download the Terraform CLI from the Terraform downloads page. Extract the binary from the zip file and copy it into your path. Further instructions can be found by visiting Installing Terraform.
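For example, on Linux the installation amounts to something like the following; the release version shown here is illustrative, so substitute whichever current version the downloads page offers:

$ curl -LO https://releases.hashicorp.com/terraform/0.12.29/terraform_0.12.29_linux_amd64.zip
$ unzip terraform_0.12.29_linux_amd64.zip
$ sudo mv terraform /usr/local/bin/
$ terraform version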

Note: EKS clusters are created in the eu-west-1 region by default. If you want to create clusters in another region, update the kube-oidc-proxy/demo/infrastructure/amazon/providers.tf file with the name of your preferred region before completing this step.
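The setting in question is the standard AWS provider region; for example, to use us-west-2, the provider block would look something like this (the surrounding contents of providers.tf in the repository may differ):

provider "aws" {
  region = "us-west-2"
}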

$ cd demo
$ cp -r infrastructure/amazon infrastructure/amazon_cluster_1
$ cp -r infrastructure/amazon infrastructure/amazon_cluster_2
$ cp -r infrastructure/amazon infrastructure/amazon_cluster_3

Apply the Terraform configuration to spin up these clusters:

$ CLOUD=amazon_cluster_1 make terraform_apply
$ CLOUD=amazon_cluster_2 make terraform_apply
$ CLOUD=amazon_cluster_3 make terraform_apply

Once complete, there should be three clusters, along with three kubeconfig files for authenticating to them in the usual way through AWS IAM.

$ eksctl get clusters
NAME            REGION
cluster-2e56deb2    eu-west-1
cluster-325e7cb3    eu-west-1
cluster-b8706149    eu-west-1
$ ls -a
... .kubeconfig-amazon_cluster_1 .kubeconfig-amazon_cluster_2 .kubeconfig-amazon_cluster_3 ...

Note: If the previous step fails to create worker nodes or the kubeconfig files, re-run terraform_apply.

To deploy the components, we will use Jsonnet to generate the manifests that will be applied to each cluster. The generated manifests are driven by a single config.jsonnet file that sets the options for each cluster. When finished, this file should be stored in the kube-oidc-proxy/demo directory.

The following is an example of what the config.jsonnet file will look like when you’re finished. The next few steps walk you through creating the file.

local main = import './manifests/main.jsonnet';

function(cloud='amazon_cluster_1') main {
  cloud: cloud,
  clouds+: {
    amazon_cluster_1: {
      master: true,
      domain_part: '.cluster-1',
      config: import './manifests/amazon_cluster_1-config.json',
    },
    amazon_cluster_2: {
      master: false,
      domain_part: '.cluster-2',
      config: import './manifests/amazon_cluster_2-config.json',
    },
    amazon_cluster_3: {
      master: false,
      domain_part: '.cluster-3',
      config: import './manifests/amazon_cluster_3-config.json',
    },
    google: null,
    amazon: null,
    digitalocean: null,
  },
  base_domain: '.eks.aws.joshvanl.com',
  cert_manager+: {
    letsencrypt_contact_email:: 'xxxxxx@gmail.com',
    solvers+: [
      {
        http01: {
          ingress: {},
        },
      },
    ],
  },
  dex+: if $.master then {
    connectors: [
      $.dex.Connector('github', 'GitHub', 'github', {
        clientID: 'xxx',
        clientSecret: 'xxx',
        homePage: 'eks.aws.joshvanl.com',
      }),
    ],
  } else {
  },
}

First, we reference each cluster that we have created, along with the secrets that were generated, as these will be used to build the authentication infrastructure. We also need to denote a single master cluster to house the Dex and landing page web server deployments, which service all the other clusters in our infrastructure.

function(cloud='amazon_cluster_1') main {
  cloud: cloud,
  clouds+: {
    amazon_cluster_1: {
      master: true,
      domain_part: '.cluster-1',
      config: import './manifests/amazon_cluster_1-config.json',
    },
    amazon_cluster_2: {
      master: false,
      domain_part: '.cluster-2',
      config: import './manifests/amazon_cluster_2-config.json',
    },
    amazon_cluster_3: {
      master: false,
      domain_part: '.cluster-3',
      config: import './manifests/amazon_cluster_3-config.json',
    },
    google: null,
    amazon: null,
    digitalocean: null,
  },

Next, denote the base domain name that you own, which will be used to connect to each of the clusters. The base domain is also used in the URL you’ll use to reach the landing page later.

Replace ‘.eks.aws.joshvanl.com’ with your own base domain.

  base_domain: '.eks.aws.joshvanl.com',

Configure cert-manager to set the account email address along with how we want to solve challenges when requesting certificates. Here we will solve challenges using HTTP01.

Replace ‘xxxxxx@gmail.com’ with your own contact email address.

  cert_manager+: {
    letsencrypt_contact_email:: 'xxxxxx@gmail.com',
    solvers+: [
      {
        http01: {
          ingress: {},
        },
      },
    ],
  },
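For reference, this renders to something roughly equivalent to the following cert-manager ClusterIssuer; the apiVersion, resource name, and account key secret name shown here are illustrative and may not match the demo manifests exactly:

apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt
spec:
  acme:
    email: xxxxxx@gmail.com
    server: https://acme-v02.api.letsencrypt.org/directory
    privateKeySecretRef:
      name: letsencrypt-account-key
    solvers:
    - http01:
        ingress: {}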

Finally, set up the connector you would like to use with Dex.

  dex+: if $.master then {
    connectors: [
      $.dex.Connector('github', 'GitHub', 'github', {
        clientID: 'xxx',
        clientSecret: 'xxx',
        homePage: 'eks.aws.joshvanl.com',
      }),
    ],
  } else {
  },

These connectors are used to complete the OAuth flow to verify the identity of new users. When the flow completes, Dex signs an OIDC token containing the identity that was received and verified by that authority. In this case, we will be using a GitHub application, so the OAuth flow is resolved through users’ GitHub accounts and their identities are based on their GitHub profiles. You can find more information on how to create GitHub OAuth applications in Building OAuth Apps.
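For reference, Dex’s GitHub connector is configured with a block roughly like the following; the Jsonnet Connector helper above generates something similar, and the redirect URI and exact field mapping shown here are illustrative:

connectors:
- type: github
  id: github
  name: GitHub
  config:
    clientID: xxx
    clientSecret: xxx
    redirectURI: https://dex.eks.aws.joshvanl.com/callback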

With the clusters created and configuration written, we can begin to install the components.

Deploy the manifests into the master cluster:

$ CLOUD=amazon_cluster_1 make manifests_apply

This should install all components into the cluster, correctly configured. In order to solve the HTTP01 challenges and make the cluster routable from the internet with your chosen base domain, wait for the Service of type LoadBalancer to be assigned an external address.

$ export KUBECONFIG=.kubeconfig-amazon_cluster_1
$ kubectl get services --namespace auth
NAME                        TYPE           CLUSTER-IP       EXTERNAL-IP                                                               PORT(S)                      AGE
contour                     LoadBalancer   172.20.221.86    a21a97cf9d40e11e9b58302e1256987f-1040136959.eu-west-1.elb.amazonaws.com   443:31844/TCP,80:32636/TCP   105s
dex                         ClusterIP      172.20.147.161   <none>                                                                    5556/TCP                     104s
gangway                     ClusterIP      172.20.69.133    <none>                                                                    8080/TCP                     80s
kube-oidc-proxy             ClusterIP      172.20.60.178    <none>                                                                    443/TCP                      79s
landingpage                 ClusterIP      172.20.90.110    <none>                                                                    80/TCP                       79s

Once these are generated, create three CNAME records: one pointing to the landing page URL, one to the Dex server, and a wildcard record pointing to the components inside that cluster. You can discover which ingress records have been created for this cluster like so:

$ kubectl get ingressroutes --namespace auth
NAME              FQDN                                             TLS SECRET   FIRST ROUTE   STATUS   STATUS DESCRIPTION
dex               dex.eks.aws.joshvanl.com                                      /             valid    valid IngressRoute
gangway           gangway.cluster-1.eks.aws.joshvanl.com                        /             valid    valid IngressRoute
kube-oidc-proxy   kube-oidc-proxy.cluster-1.eks.aws.joshvanl.com                /             valid    valid IngressRoute

$ kubectl get ingress --namespace auth
NAME                        HOSTS                                                                                                                     ADDRESS   PORTS     AGE
landingpage                 gangway.cluster-1.eks.aws.joshvanl.com,kube-oidc-proxy.cluster-1.eks.aws.joshvanl.com,dex.eks.aws.joshvanl.com + 1 more...             80, 443   14s

This would require three CNAME records like the following:

*.cluster-1.eks.aws  CNAME 1h a21a97cf9d40e11e9b58302e1256987f-1040136959.eu-west-1.elb.amazonaws.com.
dex.eks.aws          CNAME 1h a21a97cf9d40e11e9b58302e1256987f-1040136959.eu-west-1.elb.amazonaws.com.
eks.aws              CNAME 1h a21a97cf9d40e11e9b58302e1256987f-1040136959.eu-west-1.elb.amazonaws.com.

Once the DNS records have propagated across the internet, the HTTP01 challenges should succeed and certificates should be issued. You can check their status via the Certificate resources and the log output from the cert-manager controller:

$ kubectl get certificates --namespace auth
NAME              READY   SECRET                AGE
dex               True    dex-tls               13s
gangway           True    gangway-tls           13s
kube-oidc-proxy   True    kube-oidc-proxy-tls   12s
landingpage       True    landingpage-tls       12s

$ kubectl logs --namespace cert-manager cert-manager-xxx

Note: If you get a certificate error, recycle the Kube-OIDC-Proxy, Dex, and Gangway pods.

With the certificates issued, you should now be able to access the Kube-OIDC-Proxy demo landing page at your base domain.

Since we have deployed the manifests to only a single cluster so far, only the first cluster will have a live Gangway. By clicking on the button for GANGWAY AMAZON_CLUSTER_1, we can request an OIDC token for our first cluster by authenticating to GitHub.

Once you’ve downloaded the kubeconfig, you should be able to connect to this cluster through Kube-OIDC-Proxy using your OIDC token.
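The downloaded file is a standard kubeconfig whose cluster entry points at the Kube-OIDC-Proxy endpoint rather than the EKS API server, and whose user entry uses the OIDC auth provider with the issued tokens. Abbreviated and with illustrative values, it looks roughly like this:

apiVersion: v1
kind: Config
clusters:
- name: amazon_cluster_1
  cluster:
    server: https://kube-oidc-proxy.cluster-1.eks.aws.joshvanl.com
users:
- name: xxxxxx@gmail.com
  user:
    auth-provider:
      name: oidc
      config:
        client-id: gangway
        client-secret: <client secret>
        id-token: <OIDC token>
        refresh-token: <refresh token>
        idp-issuer-url: https://dex.eks.aws.joshvanl.com
contexts:
- name: amazon_cluster_1
  context:
    cluster: amazon_cluster_1
    user: xxxxxx@gmail.com
current-context: amazon_cluster_1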

$ kubectl --kubeconfig ~/Downloads/kubeconfig get nodes
Error from server (Forbidden): nodes is forbidden: User "xxxxxx@gmail.com" cannot list resource "nodes" in API group "" at the cluster scope

This command fails because we have not yet applied any RBAC permissions for this user. It does, however, show that we are connecting with our GitHub identity. For the time being, we can assign this user cluster-admin rights, but you will most likely want to create more restricted, finer-grained permissions for the tenants of your clusters.

Replace xxxxxx@gmail.com with the email address associated with your GitHub identity.

$ kubectl create clusterrolebinding xxxxxx@gmail.com --clusterrole cluster-admin --user xxxxxx@gmail.com
clusterrolebinding.rbac.authorization.k8s.io/xxxxxx@gmail.com created
$ kubectl --kubeconfig ~/Downloads/kubeconfig get nodes
NAME                                       STATUS   ROLES    AGE     VERSION
ip-10-0-2-136.eu-west-1.compute.internal   Ready    <none>   32m   v1.14.6-eks-5047ed
ip-10-0-3-178.eu-west-1.compute.internal   Ready    <none>   32m   v1.14.6-eks-5047ed
ip-10-0-3-50.eu-west-1.compute.internal    Ready    <none>   32m   v1.14.6-eks-5047ed
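As an alternative to cluster-admin, a more restricted grant could bind the same identity to a namespaced role. For example, the following (using a hypothetical dev namespace) grants the built-in edit ClusterRole in that namespace only:

$ kubectl create namespace dev
$ kubectl create rolebinding xxxxxx@gmail.com-dev-edit \
    --namespace dev \
    --clusterrole edit \
    --user xxxxxx@gmail.com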

Now that the first cluster has been configured, we are ready to repeat the process for the next two clusters. Remember to create new CNAME records that point to the new cluster endpoints so they are routable via the Internet and are able to solve HTTP01 challenges.

$ CLOUD=amazon_cluster_2 make manifests_apply
$ CLOUD=amazon_cluster_3 make manifests_apply

Neither of these non-master clusters deploys Dex or the landing page, since those components live in the master cluster and are shared across all clusters.

When complete, tokens can be requested for all three clusters, and each can be accessed using OIDC.

Closing thoughts

Using Kube-OIDC-Proxy enables organisations with multi-cluster, multi-tenant Kubernetes infrastructure to facilitate consistent OIDC authentication based on third party identity providers. In this post, I have demonstrated how combining other open source projects with Kube-OIDC-Proxy creates a simplified log-in experience for end users of the clusters.

Future development of the project will involve creating more options around how proxying requests with and without tokens are handled, as well as implementing auditing. You can find the project and follow the progress on GitHub here.

The content and opinions in this post are those of the third-party author and AWS is not responsible for the content or accuracy of this post.


 

Josh is a Customer Reliability Engineer at Jetstack helping customers through their Kubernetes story. He has been working with Kubernetes for three years on open source projects such as cert-manager, tarmak and Kube-OIDC-Proxy. Outside of work Josh enjoys cooking and yoga.

email: joshua.vanleeuwen@jetstack.io
Twitter: @JoshVanL
github.com/joshvanl

Jeremy Cowan

Jeremy Cowan is a Specialist Solutions Architect for containers at AWS, although his family thinks he sells "cloud space". Prior to joining AWS, Jeremy worked for several large software vendors, including VMware, Microsoft, and IBM. When he's not working, you can usually find him on a trail in the wilderness, far away from technology.