Containers

Diving into IAM Roles for Service Accounts

A common challenge architects face when designing a Kubernetes solution on AWS is how to grant containerized workloads permission to access an AWS service or resource. AWS Identity and Access Management (IAM) provides fine-grained access control, letting you specify who can access which AWS services or resources and enforcing the principle of least privilege. The challenge when your workload runs in Kubernetes, however, is providing that workload with an identity that IAM can use for authentication.

In 2019, AWS introduced IAM Roles for Service Accounts (IRSA), leveraging AWS Identity APIs, an OpenID Connect (OIDC) identity provider, and Kubernetes Service Accounts to apply fine-grained access controls to Kubernetes pods. For more details, refer to the IRSA launch blog.

In this post, we dive deeper into IAM Roles for Service Accounts to help you understand how the various pieces fit together and what really happens behind the scenes.

Walkthrough

In this walkthrough, we will show the journey and the concepts behind how Kubernetes Service Accounts can be leveraged to gain access to AWS services and resources. We will start several Kubernetes Pods on an Amazon EKS cluster and attempt to access Amazon S3 from them.

Prerequisites

To follow this walkthrough, you will need the AWS CLI, eksctl, kubectl, jq, and a JWT decoder such as jwt-cli installed, along with an AWS account and permissions to create an Amazon EKS cluster and IAM resources.

Detailed Steps

Let’s start by creating an EKS cluster.

$ eksctl create cluster \
--name eks-oidc-demo \
--region us-east-2

Next, we will create a single Kubernetes Pod with a restart policy set so that the Pod is not restarted once the workload inside it exits. The Pod uses the amazon/aws-cli container image and runs an aws s3 ls command.

$ cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: eks-iam-test1
spec:
  containers:
    - name: my-aws-cli
      image: amazon/aws-cli:latest
      args: ['s3', 'ls']
  restartPolicy: Never
EOF

Using the kubectl get pod command, we can see that the pod has run. However, it has exited with an error.

$ kubectl get pod
NAME              READY   STATUS   RESTARTS   AGE
eks-iam-test1     0/1     Error    0          37s

Taking a look at the logs, we can see the root cause: an access denied error while calling the ListBuckets operation (this is expected at this stage).

$ kubectl logs eks-iam-test1
An error occurred (AccessDenied) when calling the ListBuckets operation: Access Denied

Checking the AWS CloudTrail console within the AWS Management Console, we can get additional details about this error. Since we are trying to list the buckets, we can filter CloudTrail events by setting the Event name to ListBuckets.

Select a ListBuckets event in your CloudTrail logs and open the event record, and you will see output similar to the following:

{
...
  "userIdentity": {
    "type": "AssumedRole",
    "principalId": "xxxx",
    "arn": "arn:aws:sts::111122223333:assumed-role/eksctl-eks-oidc-demo-nodegroup-ng-NodeInstanceRole-xxxx/xxxx",
    "accountId": "111122223333",
    "accessKeyId": "AKIAIOSFODNN7EXAMPLE",
    "sessionContext": {
      "sessionIssuer": {
        "type": "Role",
        "principalId": "xxxx",
        "arn": "arn:aws:iam::xxxx:role/eksctl-eks-oidc-demo-nodegroup-ng-NodeInstanceRole-xxxx",
        "accountId": "111122223333",
        "userName": "eksctl-eks-oidc-demo-nodegroup-ng-NodeInstanceRole-xxxx"
      },
      "webIdFederationData": {},
      "attributes": {
        "creationDate": "2021-12-04T14:54:49Z",
        "mfaAuthenticated": "false"
      },
      "ec2RoleDelivery": "2.0"
    }
  },
  "eventTime": "2021-12-04T15:09:20Z",
  "eventSource": "s3.amazonaws.com",
  "eventName": "ListBuckets",
  "awsRegion": "us-east-2",
  "sourceIPAddress": "192.0.2.1",
  "userAgent": "[aws-cli/2.4.5 Python/3.8.8 Linux/5.4.156-83.273.amzn2.x86_64 docker/x86_64.amzn.2 prompt/off command/s3.ls]",
  "errorCode": "AccessDenied",
  "errorMessage": "Access Denied",
  "requestParameters": {
    "Host": "s3.us-east-2.amazonaws.com"
  },
...
}

Under the userIdentity section of the output, you can see that our workload running in the Kubernetes Pod has assumed the IAM role attached to the Amazon EC2 instance and is using that role to try to list the S3 buckets. This is because no other AWS credentials were found in the container, so the SDK fell back to the EC2 instance metadata service, as described in the Python boto3 SDK documentation.
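
To see what that fallback looks like in practice, here is a rough sketch you can run yourself, assuming curl is available in the container image and that the instance metadata service is reachable from pods (which depends on your metadata hop-limit settings); the pod name is just for illustration:

$ kubectl run imds-test --rm -it --image=amazon/aws-cli:latest --command -- bash

# Inside the container: request an IMDSv2 session token, then list the role
# attached to the node's instance profile.
TOKEN=$(curl -s -X PUT "http://169.254.169.254/latest/api/token" \
  -H "X-aws-ec2-metadata-token-ttl-seconds: 21600")
curl -s -H "X-aws-ec2-metadata-token: $TOKEN" \
  http://169.254.169.254/latest/meta-data/iam/security-credentials/
# Prints the node role name; appending it to the path returns temporary
# credentials for that role, which is exactly what the SDK fell back to.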

As the IAM role within the EC2 instance profile does not have the necessary permissions to list the buckets, the command received an “Access Denied” error. One way to fix this could be to attach additional permissions to the EC2 instance profile. However, this violates a key security principle: the principle of least privilege. The additional permission would apply at the EC2 node level, not at the Kubernetes Pod level, so every Pod running on that node would gain access to our S3 buckets. We want to restrict this permission to the Pod level.

This leads us on to the next question: how could we inject AWS credentials into a container so the container does not default to the EC2 instance profile? Injecting AWS credentials via Kubernetes Secrets or environment variables would not be secure, and the user would have to manage the lifecycle of these credentials. We would not recommend either of those approaches.

Kubernetes Service Accounts

Kubernetes Pods are given an identity through a Kubernetes concept called a Kubernetes Service Account. When a Service Account is created, a JWT token is automatically created as a Kubernetes Secret. This Secret can then be mounted into Pods and used by that Service Account to authenticate to the Kubernetes API Server.

$ kubectl get sa
NAME          SECRETS   AGE
default       1         23h
$ kubectl describe sa default
Name:                default
Namespace:           default
Labels:              <none>
Annotations:         <none>
Image pull secrets:  <none>
Mountable secrets:   default-token-m4tdn
Tokens:              default-token-m4tdn
Events:              <none>

Unfortunately, this default token has a few problems that make it unusable for IAM authentication. First, only the Kubernetes API server can validate this token. Second, these Service Account tokens do not expire, and rotating the signing key is a difficult process. You can view the token by retrieving the Secret and decoding it with the jwt-cli.

$ kubectl get secret default-token-m4tdn -o json | jq -r '.data.token' | base64 -d | jwt decode --json -
{
  "header": {
    "alg": "RS256",
    "kid": "LflWmdoop8Xt5sUnBFTmCLX0B8MnS1kcPSUcyjr8npw"
  },
  "payload": {
    "iss": "kubernetes/serviceaccount",
    "kubernetes.io/serviceaccount/namespace": "default",
    "kubernetes.io/serviceaccount/secret.name": "default-token-m4tdn",
    "kubernetes.io/serviceaccount/service-account.name": "default",
    "kubernetes.io/serviceaccount/service-account.uid": "5af7481a-2c9c-4613-9266-1037a23961a4",
    "sub": "system:serviceaccount:default:default"
  }
}

In Kubernetes 1.12, the ProjectedServiceAccountToken feature was introduced. It allows a fully OIDC-compliant JWT, issued by the Kubernetes TokenRequest API, to be mounted into a Pod as a projected volume. The relevant Service Account Token Volume Projection flags are enabled by default on an EKS cluster, so fully compliant OIDC JWT Service Account tokens are projected into each pod instead of the legacy token described in the previous paragraph.
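
If you want to see the TokenRequest API in action directly, recent kubectl versions can request such a token for you; this is only to illustrate the mechanism, and the audience and duration values below are arbitrary examples:

# Ask the TokenRequest API for a short-lived, audience-scoped token for the
# default Service Account (requires kubectl v1.24 or later).
$ kubectl create token default --audience https://kubernetes.default.svc --duration 1h
eyJhbGciOiJSUzI1NiIsImtpZCI6...   # a signed OIDC JWT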

To inspect this OIDC Token, we can create a new pod that just has a sleep process inside with the following command:

$ cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: eks-iam-test2
spec:
  containers:
    - name: my-aws-cli
      image: amazon/aws-cli:latest
      command: ['sleep', '36000']
  restartPolicy: Never
EOF

As we have not specified a Kubernetes Service Account, the default Service Account is used, and an OIDC token is generated for that Service Account and mounted into the Pod.

The token can be retrieved and expanded to show a fully compliant OIDC token.

$ SA_TOKEN=$(kubectl exec -it eks-iam-test2 -- cat /var/run/secrets/kubernetes.io/serviceaccount/token)
$ jwt decode $SA_TOKEN --json --iso8601
{
  "header": {
    "alg": "RS256",
    "kid": "689de1734321bcfdfbef825503a5ead235981e7a"
  },
  "payload": {
    "aud": [
      "https://kubernetes.default.svc"
    ],
    "exp": "2023-02-18T16:37:58+00:00",
    "iat": "2022-02-18T16:37:58+00:00",
    "iss": "https://oidc.eks.us-east-2.amazonaws.com/id/xxxx",
    "kubernetes.io": {
      "namespace": "default",
      "pod": {
        "name": "eks-iam-test2",
        "uid": "cd953361-41a2-4de0-b799-51f169920741"
      },
      "serviceaccount": {
        "name": "default",
        "uid": "5af7481a-2c9c-4613-9266-1037a23961a4"
      },
      "warnafter": 1645205885
    },
    "nbf": "2022-02-18T16:37:58+00:00",
    "sub": "system:serviceaccount:default:default"
  }
}

As you can see in the payload of this JWT, the issuer is an OIDC Provider. The audience for the token is https://kubernetes.default.svc. This is the address inside a cluster used to reach the Kubernetes API Server.

For security reasons, you may not want to mount this token into a Kubernetes Pod at all if the workload in the Pod is not going to call the Kubernetes API server. You can opt out by setting automountServiceAccountToken: false in the Pod spec when you create a Pod.
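
As a minimal sketch, a Pod that opts out of the default token mount could look like the following (the pod name is just for illustration):

$ cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: eks-iam-no-token
spec:
  automountServiceAccountToken: false
  containers:
    - name: my-aws-cli
      image: amazon/aws-cli:latest
      command: ['sleep', '36000']
  restartPolicy: Never
EOF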

This compliant OIDC token gives us a foundation to build on, but we still need a token that can be used to authenticate to AWS APIs. That requires an additional component to inject a second token, intended for AWS APIs, into our Kubernetes Pods. Kubernetes supports validating and mutating webhooks, and AWS has created an identity webhook that comes preinstalled in an EKS cluster. This webhook listens for pod creation API calls and can inject an additional token into our pods. The webhook can also be installed into self-managed Kubernetes clusters on AWS using this guide.
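
On an EKS cluster, you can see this webhook registered as a mutating webhook configuration; the resource name below is what current EKS clusters use, but treat it as an assumption and list all configurations if it differs on your cluster:

$ kubectl get mutatingwebhookconfiguration pod-identity-webhook
NAME                   WEBHOOKS   AGE
pod-identity-webhook   1          23h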

For the webhook to inject a new Token into our Pod, we are going to create a new Kubernetes Service Account, annotate our Service Account with an AWS IAM role ARN, and then reference this new Kubernetes Service Account in a Kubernetes Pod. The eksctl tool can be used to automate a few steps for us, but all of these steps can also be done manually.

The eksctl create iamserviceaccount command creates:

  • A Kubernetes Service Account
  • An IAM role with the specified IAM policy
  • A trust policy on that IAM role

Finally, it annotates the Kubernetes Service Account with the ARN of the IAM role it created.
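
If you prefer to do this by hand instead of using eksctl, the annotation itself is an ordinary Kubernetes annotation. Here is a sketch of the manual step, assuming the IAM role and its trust policy already exist (the Service Account name and role ARN below are placeholders):

# Hypothetical example: annotate an existing Service Account with the IAM role
# it should assume. Substitute your own Service Account name and role ARN.
$ kubectl create serviceaccount my-manual-sa
$ kubectl annotate serviceaccount my-manual-sa \
    eks.amazonaws.com/role-arn=arn:aws:iam::111122223333:role/my-irsa-role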

We also need to associate an IAM OIDC provider with the cluster before creating the service account.

$ eksctl utils associate-iam-oidc-provider --region=us-east-2 --cluster=eks-oidc-demo --approve
$ eksctl create iamserviceaccount \
  --name my-sa \
  --namespace default \
  --cluster eks-oidc-demo \
  --approve \
  --attach-policy-arn $(aws iam list-policies --query 'Policies[?PolicyName==`AmazonS3ReadOnlyAccess`].Arn' --output text) 

Inspecting the newly created Kubernetes Service Account, we can see the role we want it to assume in our pod.

$ kubectl describe sa my-sa
Name:                my-sa
Namespace:           default
Labels:              app.kubernetes.io/managed-by=eksctl
Annotations:         eks.amazonaws.com/role-arn: arn:aws:iam::xxxx:role/eksctl-eks-oidc-demo-addon-iamserviceaccount-Role1-H47XCR6FPRGQ
Image pull secrets:  <none>
Mountable secrets:   my-sa-token-kv6kc
Tokens:              my-sa-token-kv6kc
Events:              <none>

Let’s see how this IAM role looks within the AWS Management Console. Navigate to IAM, then Roles, and search for the role name you saw in the Annotations field when you described your service account.

You can see that our AmazonS3ReadOnlyAccess policy has been applied to this role.

Select the Trust relationships tab and select Edit trust relationship to view the policy document.

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Federated": "arn:aws:iam::111122223333:oidc-provider/oidc.eks.us-east-2.amazonaws.com/id/xxxx"
      },
      "Action": "sts:AssumeRoleWithWebIdentity",
      "Condition": {
        "StringEquals": {
          "oidc.eks.us-east-2.amazonaws.com/id/xxxx:aud": "sts.amazonaws.com",
          "oidc.eks.us-east-2.amazonaws.com/id/xxxx:sub": "system:serviceaccount:default:my-sa"
        }
      }
    }
  ]
}

You can see that this policy allows the identity system:serviceaccount:default:my-sa to assume the role using the sts:AssumeRoleWithWebIdentity action. The principal for this policy is our cluster’s OIDC provider.

Now let’s see what happens when we use this new Service Account within a Kubernetes Pod.

$ cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: eks-iam-test3
spec:
  serviceAccountName: my-sa 
  containers:
    - name: my-aws-cli
      image: amazon/aws-cli:latest
      command: ['sleep', '36000']
  restartPolicy: Never
EOF

If we inspect the Pod using kubectl and jq, we can see there are now two volumes mounted into our Pod. The second one has been mounted by the mutating webhook. The aws-iam-token is still generated by the Kubernetes API Server, but with a new OIDC JWT audience.

$ kubectl get pod eks-iam-test3 -o json | jq -r '.spec.containers | .[].volumeMounts'
[
  {
    "mountPath": "/var/run/secrets/kubernetes.io/serviceaccount",
    "name": "kube-api-access-p2dlk",
    "readOnly": true
  },
  {
    "mountPath": "/var/run/secrets/eks.amazonaws.com/serviceaccount",
    "name": "aws-iam-token",
    "readOnly": true
  }
]

$ kubectl get pod eks-iam-test3 -o json | jq -r '.spec.volumes[] | select(.name=="aws-iam-token")'
{
  "name": "aws-iam-token",
  "projected": {
    "defaultMode": 420,
    "sources": [
      {
        "serviceAccountToken": {
          "audience": "sts.amazonaws.com",
          "expirationSeconds": 86400,
          "path": "token"
        }
      }
    ]
  }
}

If we exec into the running Pod and inspect this token, we can see that it looks slightly different from the previous SA Token.

$ IAM_TOKEN=$(kubectl exec -it eks-iam-test3 -- cat /var/run/secrets/eks.amazonaws.com/serviceaccount/token)
$ jwt decode $IAM_TOKEN --json --iso8601
{
  "header": {
    "alg": "RS256",
    "kid": "689de1734321bcfdfbef825503a5ead235981e7a"
  },
  "payload": {
    "aud": [
      "sts.amazonaws.com"
    ],
    "exp": "2022-02-19T16:43:55+00:00",
    "iat": "2022-02-18T16:43:55+00:00",
    "iss": "https://oidc.eks.us-east-2.amazonaws.com/id/xxxx",
    "kubernetes.io": {
      "namespace": "default",
      "pod": {
        "name": "eks-iam-test3",
        "uid": "6fd2f65f-4554-4317-9343-c8e5d28029c3"
      },
      "serviceaccount": {
        "name": "my-sa",
        "uid": "2c935d89-3ff0-425d-85c2-8236a6d626aa"
      }
    },
    "nbf": "2022-02-18T16:43:55+00:00",
    "sub": "system:serviceaccount:default:my-sa"
  }
}

You can see that the intended audience for this token is now sts.amazonaws.com, the issuer that created and signed the token is still our OIDC provider, and the expiration is much shorter at 24 hours. We can modify the expiration duration using the eks.amazonaws.com/token-expiration annotation in our Pod definition or Service Account definition.
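
For example, the annotation can be applied to the Service Account; the one-hour value below is just an illustration, and only Pods created after the annotation is added pick up the new expiration:

# Request a shorter (one-hour) token lifetime for Pods using this Service Account.
$ kubectl annotate serviceaccount my-sa \
    eks.amazonaws.com/token-expiration="3600"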

The mutating webhook does more than just mount an additional token into the Pod. The mutating webhook also injects environment variables.

$ kubectl get pod eks-iam-test3 -o json | jq -r '.spec.containers | .[].env'
[
  {
    "name": "AWS_DEFAULT_REGION",
    "value": "us-east-2"
  },
  {
    "name": "AWS_REGION",
    "value": "us-east-2"
  },
  {
    "name": "AWS_ROLE_ARN",
    "value": "arn:aws:iam::111122223333:role/eksctl-eks-oidc-demo-addon-iamserviceaccount-Role1-1SJZ3F7H39X72"
  },
  {
    "name": "AWS_WEB_IDENTITY_TOKEN_FILE",
    "value": "/var/run/secrets/eks.amazonaws.com/serviceaccount/token"
  }
]

These environment variables are used by the AWS SDKs and the CLI when assuming a role with a web identity. For example, see the Python boto3 SDK documentation. The SDK in our workload will now use these credentials instead of the credentials from the EC2 instance profile.
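
You can confirm which identity the CLI inside the Pod now resolves to; the account ID, role name, and session name in this sample output are placeholders:

$ kubectl exec eks-iam-test3 -- aws sts get-caller-identity
{
    "UserId": "AROAEXAMPLEID:botocore-session-xxxx",
    "Account": "111122223333",
    "Arn": "arn:aws:sts::111122223333:assumed-role/eksctl-eks-oidc-demo-addon-iamserviceaccount-Role1-xxxx/botocore-session-xxxx"
}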

By default, the mutating webhook does not instruct the SDKs to use a Regional STS endpoint; instead, they default to the legacy global endpoint. AWS recommends adding the annotation eks.amazonaws.com/sts-regional-endpoints: "true" to the Kubernetes Service Account, which injects the environment variable AWS_STS_REGIONAL_ENDPOINTS=regional into your workloads. For more information, see the documentation. This behavior will become the default in a future EKS platform release.
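
For example, the annotation can be added to the Service Account we created earlier; note that existing Pods must be recreated to pick up the new environment variable:

$ kubectl annotate serviceaccount my-sa \
    eks.amazonaws.com/sts-regional-endpoints="true"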

Now that our workload has a token it can use to attempt to authenticate with IAM, the next step is getting AWS IAM to trust these tokens. AWS IAM supports federated identities using OIDC identity providers: IAM can authenticate AWS API calls made with supported identity providers after receiving a valid OIDC JWT. The token is passed to the AWS STS AssumeRoleWithWebIdentity API operation to obtain temporary IAM credentials.
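
To make the exchange concrete, here is a sketch of calling that API by hand from inside our Pod, using the token file and role ARN that the webhook injected (the session name is arbitrary, and the returned temporary credentials are omitted):

# AssumeRoleWithWebIdentity is an unsigned STS call, so no existing AWS
# credentials are needed to make it.
$ kubectl exec eks-iam-test3 -- sh -c \
    'aws sts assume-role-with-web-identity \
       --role-arn "$AWS_ROLE_ARN" \
       --role-session-name manual-irsa-test \
       --web-identity-token "$(cat $AWS_WEB_IDENTITY_TOKEN_FILE)"'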

The OIDC JWT in our Kubernetes workload is cryptographically signed, and IAM must be able to trust and validate these tokens before the AWS STS AssumeRoleWithWebIdentity API operation will return temporary credentials. As part of the Service Account Issuer Discovery feature of Kubernetes, EKS hosts a public OpenID provider configuration document (the discovery endpoint) at https://OIDC_PROVIDER_URL/.well-known/openid-configuration, which points to the JSON Web Key Set (JWKS) containing the public keys used to validate token signatures.

Let’s take a look at this endpoint. We can use the aws eks describe-cluster command to get the OIDC Provider URL.

$ IDP=$(aws eks describe-cluster --name eks-oidc-demo --query cluster.identity.oidc.issuer --output text)

# Reach the Discovery Endpoint
$ curl -s $IDP/.well-known/openid-configuration | jq -r '.'
{
  "issuer": "https://oidc.eks.us-east-2.amazonaws.com/id/xxxx",
  "jwks_uri": "https://oidc.eks.us-east-2.amazonaws.com/id/xxxx/keys",
  "authorization_endpoint": "urn:kubernetes:programmatic_authorization",
  "response_types_supported": [
    "id_token"
  ],
  "subject_types_supported": [
    "public"
  ],
  "claims_supported": [
    "sub",
    "iss"
  ],
  "id_token_signing_alg_values_supported": [
    "RS256"
  ]
}

In the above output, you can see the jwks_uri field, which points to the JSON Web Key Set (JWKS): the set of public keys used to verify the JWT (JSON Web Token). Refer to the documentation for details about the JWKS properties.

$ curl -s $IDP/keys | jq -r '.'
{
  "keys": [
    {
      "kty": "RSA",
      "e": "AQAB",
      "use": "sig",
      "kid": "b12d2f264e3eb3036bde33008066f24f9eafa28e",
      "alg": "RS256",
      "n": "xxx"
    },
    ...
  ]
}

The IAM service uses these public keys to validate the signature of the token presented by the workload.

For IAM to accept the token from a pod, trust has to be established between IAM and the OIDC provider. In our case, because we associated the IAM OIDC provider with the cluster before creating the service account, this trust is already set up.

If not, you can set it up manually using the steps below (or with the AWS CLI, as sketched after the steps). To create the trust relationship in the AWS Management Console, navigate to IAM, then Identity providers, then Add provider.

  1. Select OpenID Connect as the provider type.
  2. Enter the OIDC provider endpoint URL, for example https://oidc.eks.us-east-2.amazonaws.com/id/xxxx.
  3. Select Get thumbprint.
  4. Set the audience to sts.amazonaws.com.
  5. Select Add provider.
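
Alternatively, the same trust can be created with the AWS CLI; a sketch, assuming you have already obtained the issuer's root CA thumbprint (the thumbprint below is a placeholder):

# Substitute your cluster's OIDC issuer URL and the root CA thumbprint.
$ aws iam create-open-id-connect-provider \
    --url https://oidc.eks.us-east-2.amazonaws.com/id/xxxx \
    --client-id-list sts.amazonaws.com \
    --thumbprint-list <root-CA-thumbprint>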

Finally, we can use our token in the Kubernetes pod to authenticate with the Amazon S3 APIs.

$ cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: eks-iam-test4
spec:
  serviceAccountName: my-sa 
  containers:
    - name: my-aws-cli
      image: amazon/aws-cli:latest
      args: ["s3", "ls"]
  restartPolicy: Never
EOF

After a few seconds, we should see that the pod has completed successfully. Checking the logs of the pod, we should see our S3 buckets listed.

$ kubectl get pods
NAME                READY   STATUS      RESTARTS   AGE
eks-iam-test4       0/1     Completed   0          83s
$ kubectl logs eks-iam-test4
<s3 bucket list>

Clean Up

To avoid incurring future charges, the EKS cluster can be deleted using eksctl.

$ eksctl delete cluster \
  --name eks-oidc-demo \
  --region us-east-2

Conclusion

I hope you have enjoyed this journey and now have a good understanding of what really happens behind the scenes when we access AWS services from Pods. We have seen how AWS credentials default to the EC2 instance profile when the workload cannot find any other credentials, and how Kubernetes Service Accounts and Service Account tokens can be used to give Pods identities. Finally, we have seen how IAM can use an external OIDC identity provider to validate tokens and issue temporary IAM credentials.

Gaurav Pilay

Gaurav Pilay is a Sr. Modernization Architect at AWS Professional Services. He helps Global Telco customers to conceptualize, design and realize their migration and modernization journey. He enjoys technical discussions on architecting and building scalable, highly available and secure solutions using AWS Cloud Services.