Containers

Announcing Container Image Signing with AWS Signer and Amazon EKS

Introduction

Today we are excited to announce the launch of AWS Signer Container Image Signing, a new capability that gives customers native AWS support for signing and verifying container images stored in container registries like Amazon Elastic Container Registry (Amazon ECR). AWS Signer is a fully managed code signing service to ensure trust and integrity of your code. AWS Signer manages code signing certificates, public and private keys, and provides a set of features to simplify lifecycle management so that you can focus on the functions of signing and verifying your code.

AWS Signer now supports signing and verifying container images, and is integrated with Notation, an open source Notary project within the Cloud Native Computing Foundation (CNCF). With contributions from AWS and others, Notary is an open standard and client implementation that allows for vendor-specific plugins for key management and other integrations. AWS Signer manages signing keys, key rotation, and PKI management for you, and is integrated with Notation through a curated plugin that provides a simple client-based workflow.

Notation uses new Open Containers Initiative (OCI) distribution features built into Amazon ECR that enable you to store signatures and other artifacts in the registry right alongside the images they refer to. This allows you to sign your container images with a simple command that handles the interaction with Amazon ECR transparently for you, while AWS Signer manages signing material and simplifies lifecycle management and revocation operations.

Background

Container images and distribution are designed with integrity checks already in place. Every image and artifact is described by a manifest, which is referenced by a digest that is a hash of its content. This enables clients to easily verify the integrity of a given image as it moves from the registry into your build or workload environment. So, why is signing important? Signing establishes authenticity and provenance, giving you the ability to determine whether content comes from a particular party. You can then restrict your container image builds and deployments to images from trusted parties, and sign your own content to support deployment policies.

The container image signing approach implemented in Notation leverages container image integrity features, and uses a simple mechanism to cryptographically sign an image manifest, rather than its layers. Image manifests are verifiable records of image content, and contain hashed digests for all layers of the image they describe. It’s as effective and much faster to sign the image digest rather than image layers, and this method works for remote images without needing to pull them fully to your signing environment. Once a signature is created, it is encoded as an OCI artifact and pushed to the image repository alongside the container image. At any point in time, a client can discover signatures for a given image and verify them against a trusted content publisher’s identity. This is the same sign and verify approach used for decades in software artifact and package distribution, with tools that fit well into container workflows.
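To make this concrete, here is a minimal, hedged illustration (not part of the signing workflow): because an image digest is simply the SHA-256 hash of its manifest, hashing the raw manifest bytes reproduces the digest that gets signed. The registry reference is a placeholder, and the oras CLI is assumed to be installed.

# Fetch the raw image manifest and hash it; the result should match the image digest
$ oras manifest fetch <REGISTRY>/<REPOSITORY>:<TAG> | sha256sum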

Container image verification provides a logical gate for building trusted content policies for your deployments, for example with Kyverno or OPA Gatekeeper. This helps you ensure that only vetted and trusted images are running in your deployed workloads. Without image verification, policies can mandate that only images from certain registries or repositories can be deployed, or that only certain image tags are allowed to deploy in production environments. With image verification, you can go one step further and bring the trust of content into your policy decisions, only allowing use of images from known and vetted sources.

With this launch, AWS Signer brings fully managed signing and verification for container images to a straightforward workflow through its integration with Notation. As the solution walkthrough below shows, getting started with Signer and Notation takes only a few commands to sign and verify your images.

Container Image Signing and AWS Signer

AWS Signer is a fully managed code signing service to help you ensure the trust and integrity of your container images. Organizations validate images against a digital signature to confirm that the image is unaltered and from a trusted publisher. With AWS Signer, your security administrators have a single place to define your signing environment, including which AWS Identity and Access Management (AWS IAM) roles can sign code and in which regions. AWS Signer manages the code signing certificates and private keys and enables central management of the code signing lifecycle. Integration with AWS CloudTrail helps you track who is generating signatures and helps meet your compliance requirements.

AWS Signer supports features like cross-account signing, signature validity duration, and profile lifecycle management with cancellation and revocation operations. Cross-account signing allows security administrators to create and manage signing profiles in restricted accounts, and to grant explicit permissions to other accounts, such as developer or pipeline accounts, to sign artifacts. This provides governance and auditability that can be automated with AWS CloudTrail logs from both accounts. When creating a signing profile, you can specify a signature validity period. Think of this as a best-by date control to either warn or fail signature validation when someone verifies a signature after its validity period has ended. Finally, you can cancel a signing profile to stop it from generating any more signatures, and revoke a profile to invalidate existing signatures generated after the revocation date and time. The cancel and revoke operations give you controls to respond to changes in governance or to security incidents, with full control over verification operations. AWS Signer also provides the flexibility to revoke individual signatures when you need to invalidate specific signatures on individual images.
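As a hedged sketch of these lifecycle operations with the AWS CLI (the profile name, profile version, job ID, reason, and timestamp below are placeholders):

# Stop a profile from generating any new signatures
$ aws signer cancel-signing-profile --profile-name <PROFILE_NAME>

# Invalidate signatures generated by a profile version after a point in time
$ aws signer revoke-signing-profile \
    --profile-name <PROFILE_NAME> \
    --profile-version <PROFILE_VERSION> \
    --reason "Security incident response" \
    --effective-time <TIMESTAMP>

# Revoke a single signature by its signing job ID
$ aws signer revoke-signature \
    --job-id <SIGNING_JOB_ID> \
    --reason "Invalid artifact"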

Getting started

For this launch, container images stored in Amazon ECR and other OCI-compliant registries can be signed with Notation, using the AWS Signer plugin. To get started with Container Image Signing with AWS Signer, you will create an AWS Signer signing profile, install the Notation client along with the AWS Signer plugin, and configure the plugin to associate it with your signing profile. Once this process is complete, you can sign and verify images with a few commands using the Notation client.

For verification of your container workloads, Kubernetes is supported with this initial launch. Kyverno is an open source policy engine designed for Kubernetes. Kyverno policies are Kubernetes resources, and you use tools like kubectl and kustomize to manage your policies. AWS partner Nirmata has integrated the AWS Signer plugin with support for Notation in Kyverno and our solution below includes a walkthrough using it.

Additionally, the Ratify project is an open source verification engine which can be used with Open Policy Agent (OPA) Gatekeeper to define verification-based policies. Members of the Amazon Elastic Kubernetes Service (Amazon EKS) and AWS Signer teams have contributed to and help maintain the Ratify open source project. This project is nearing a 1.0 release, and has support for Notation with plugins including AWS Signer.

Finally, you may choose to build your own custom admission controller for Kubernetes, using the Dynamic Admission Controller approach. If you don't use a Policy-as-Code (PaC) solution like OPA Gatekeeper or Kyverno and don't have plans to adopt either, you can still use AWS Signer and Notation to ensure only trusted images are used to deploy workloads in your Kubernetes clusters. The k8s-notary-admission OSS project is available as an example of this approach and is not supported by AWS.

Solution overview

In this section we will dive deeper into container image signing with Notation and AWS Signer, followed by an example of how to use the Kyverno policy engine with Amazon EKS to validate container image signatures for use in Amazon EKS clusters.

Note:  The solution discussed in this post can also work with self-managed Kubernetes clusters running in AWS.

Use case – Signing Amazon ECR container images with Notation and AWS Signer

For our solution, we used the Notation command-line interface (CLI) to sign container images stored in private Amazon ECR repositories. The cryptographic signing material—certificates, and public and private keys—is created and managed via AWS Signer signing profiles and the AWS CLI.

To get started, the first step is to make sure that your AWS CLI is updated to the latest version. Instructions are found at Installing or updating the latest version of the AWS CLI in the AWS Command Line Interface User Guide.

The next step is to install the Notation CLI and the required AWS Signer plugin and root certificate. The AWS Signer Developer Guide provides a list of installers for Linux, macOS, and Windows. For our example, we installed the macOS arm64 version. The installer places the notation binary at /usr/local/bin/notation and the Notation CLI configuration at /Users/<USERNAME>/Library/Application Support/notation.

Once installed, the notation version—at the time of this writing—can be seen below.

# Get Notation CLI version
$ notation version
Notation - a tool to sign and verify artifacts.

Version:     1.0.0-rc.7
Go version:  go1.20.4
Git commit:  ebfb9ef707996e1dc11898db8b90faa8e8816ae6

A tree of the notation directory shows the directory structure. In the tree output you can clearly see that the AWS Signer plugin is installed, as well as the Notation truststore directory, and a Notation trustpolicy document.

# Notation directory structure
$ tree
.
├── LICENSE
├── THIRD_PARTY_LICENSES
├── plugins
│   └── com.amazonaws.signer.notation.plugin
│       ├── LICENSE
│       ├── THIRD_PARTY_LICENSES
│       └── notation-com.amazonaws.signer.notation.plugin
├── signingkeys.json
├── trustpolicy.json
└── truststore
    └── x509
        └── signingAuthority
            └── aws-signer-ts
                └── aws-signer-notation-root.crt

The installer configures our Notation client with the correct AWS Signer plugin and a truststore containing the AWS Signer root certificate. You can verify that notation is correctly configured to use the AWS Signer plugin with the following notation plugin ls command.

# List configured Notation plugins
$ notation plugin ls
NAME                                   DESCRIPTION                      VERSION         CAPABILITIES                                                                                             ERROR
com.amazonaws.signer.notation.plugin   AWS Signer plugin for Notation   1.0.0-fa04d83   [SIGNATURE_GENERATOR.ENVELOPE SIGNATURE_VERIFIER.TRUSTED_IDENTITY SIGNATURE_VERIFIER.REVOCATION_CHECK]   <nil>

To use notation with AWS Signer, you must create a signing profile in AWS Signer. The following command is used to create the notation_test signing profile.

# Create an AWS Signer signing profile with the default validity period
$ aws signer put-signing-profile \
    --profile-name notation_test \
    --platform-id Notation-OCI-SHA384-ECDSA

The preceding command sets the validity period of the newly provisioned signing profile to the default of 135 months (11 years and 3 months). For advanced use cases, where you want to reject artifacts older than a specified period, you can set a shorter signature-validity-period, as seen below.

# Create an AWS Signer signing profile with a specific validity period
$ aws signer put-signing-profile \
--profile-name notation_test \
--platform-id Notation-OCI-SHA384-ECDSA \
--signature-validity-period 'value=12, type=MONTHS'

You can then list AWS Signer signing profiles with the following command.

# List existing signing profiles
$ aws signer list-signing-profiles
{
    "profiles": [
        {
            "profileName": "notation_test",
            "profileVersion": "vjPSTMwGW3",
            "profileVersionArn": "arn:aws:signer:<AWS_REGION>:<AWS_ACCOUNT_ID>:/signing-profiles/notation_test/vjPSTMwGW3",
            "signingMaterial": {},
            "signatureValidityPeriod": {
                "value": 12,
                "type": "MONTHS"
            },
            "platformId": "Notation-OCI-SHA384-ECDSA",
            "platformDisplayName": "Notation for Container Registries",
            "status": "Active",
            "arn": "arn:aws:signer:<AWS_REGION>:<AWS_ACCOUNT_ID>:/signing-profiles/notation_test",
            "tags": {}
        }
    ]
}

With the AWS Signer plugin installed and a signing profile in place, we are almost ready to sign our container images. For this example, we are using the Kubernetes pause container, stored in an Amazon ECR repository. The notation sign command, shown in the Signing section below, uses the container image digest instead of the image tag. Using the unique digest is a best practice, as tags are mutable and can be non-deterministic.
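If you only know an image's tag, one way to resolve it to its digest is with the AWS CLI, as in this minimal sketch (the repository name and tag are placeholders):

# Resolve an image tag to its immutable digest
$ aws ecr describe-images \
    --repository-name <REPOSITORY_NAME> \
    --image-ids imageTag=<IMAGE_TAG> \
    --query 'imageDetails[0].imageDigest' \
    --output text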

Amazon ECR credentials and CLI tools

Before you can sign an Amazon ECR container image, you need to make sure that the Notation CLI has basic auth credentials to access Amazon ECR registries. We will explore two options for supplying Amazon ECR basic auth credentials to Notation:

  • Use the notation login command
  • Use a credential-helper

Both options require you to first configure your AWS STS account profile settings.

The following notation login command uses the aws ecr get-login-password command to get a password from the AWS CLI and uses the resulting basic auth credentials to log in to a region-specific Amazon ECR registry. Since Amazon ECR basic auth credentials are specific to AWS regions, Notation must authenticate with each Amazon ECR region in which it will operate.

# Login to Amazon ECR region with the Notation CLI
$ aws ecr get-login-password | notation login \
--username AWS \
--password-stdin <AWS_ACCOUNT_ID>.dkr.ecr.<AWS_REGION>.amazonaws.com
Login Succeeded

The credentials used in the notation login command are stored in the configured Notation credential store. On an M1 Mac this configuration is found in the notation directory, in the config.json file, seen below.

# Notation config.json file
{
    "auths": {},
    "credsStore": "osxkeychain"
}

In the above file, the credsStore element points to the osxkeychain. This is the default macOS system credential store used by Notation, and it is accessible via the Keychain Access application. This credential store is also used by other CLI tools, such as Docker and ORAS. In the osxkeychain, the credential is stored as a Docker Credentials type.

Credential helpers work below the CLI tools to gather needed credentials and make them available for CLI operations. The Amazon ECR Docker Credential Helper can be used as an alternative to using the aws ecr get-login-password command with CLI tools that use Amazon ECR basic auth credentials.

The Amazon ECR Docker Credential Helper uses the AWS SDK for Go v2 and the underlying AWS STS profile—used by the AWS CLI—to gather Amazon ECR basic auth credentials and temporarily store them for in-line use with CLI commands, like notation sign. This means that CLI login and logout commands are not used and Docker Credentials are not stored in the osxkeychain credential store. This eliminates the potential for CLI credential collisions and overriding errors.
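As a minimal sketch, the Amazon ECR Docker Credential Helper is wired up through a Docker-style config.json credHelpers entry that maps your registry to the ecr-login helper. This assumes the docker-credential-ecr-login binary is installed and on your PATH; whether you place the entry in Notation's config.json or in your Docker configuration depends on your setup.

# Example config.json entry routing Amazon ECR credentials to the helper
{
    "credHelpers": {
        "<AWS_ACCOUNT_ID>.dkr.ecr.<AWS_REGION>.amazonaws.com": "ecr-login"
    }
}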

Signing

Once logged in, or when using the Amazon ECR Docker Credential Helper, the following Notation command, which references the configured AWS Signer plugin and signing profile, signs the container image using the image digest.

notation sign \
<AWS_ACCOUNT_ID>.dkr.ecr.<AWS_REGION>.amazonaws.com/pause\
@sha256:fcaff905397ba63fd376d0c3019f1f1cb6e7506131389edbcb3d22719f1ae54d \
--plugin com.amazonaws.signer.notation.plugin \
--id arn:aws:signer:<AWS_REGION>:<AWS_ACCOUNT_ID>:/signing-profiles/notation_test

Note:  Even though Amazon ECR supports immutable tags as a repository-level configuration, tag-immutability is not supported with the OCI 1.0 reference specification that is used with this container image signing approach.

With the OCI 1.0 reference specification, container image signatures are stored alongside tagged container images in an OCI registry. In an Amazon ECR repository, the signatures are stored as seen in the following screenshot, where the untagged artifact is the actual signature. The Image Index contains a manifest that references the container image signature.

Clicking the Other value in the Artifact Type column of the untagged artifact shows that its type is application/vnd.cncf.notary.signature.
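You can also confirm from the command line that a signature is associated with the image; a minimal sketch using the Notation client, again referencing the image by digest:

# List signatures associated with the signed image
$ notation list <AWS_ACCOUNT_ID>.dkr.ecr.<AWS_REGION>.amazonaws.com/pause\
@sha256:fcaff905397ba63fd376d0c3019f1f1cb6e7506131389edbcb3d22719f1ae54d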

Once signed, you can inspect the signature—including certificate chains and fingerprints—with the following notation inspect command. Again, use the container image digest.

# Inspect container image signatures with Notation
$ notation inspect <AWS_ACCOUNT_ID>.dkr.ecr.<AWS_REGION>.amazonaws.com/pause\
@sha256:fcaff905397ba63fd376d0c3019f1f1cb6e7506131389edbcb3d22719f1ae54d

Inspecting all signatures for signed artifact
<AWS_ACCOUNT_ID>.dkr.ecr.<AWS_REGION>.amazonaws.com/pause@sha256:fcaff905397ba63fd376d0c3019f1f1cb6e7506131389edbcb3d22719f1ae54d
└── application/vnd.cncf.notary.signature
    └── sha256:ca78e5f730f9a789ef8c63bb55275ac12dfb9e8099e6a0a64375d8a95ed501c4
        ├── media type: application/jose+json
        ├── signature algorithm: ECDSA-SHA-384
        ├── signed attributes
        │   ├── expiry: Wed May 22 19:24:19 2024
        │   ├── com.amazonaws.signer.signingJob: arn:aws:signer:<AWS_REGION>:<AWS_ACCOUNT_ID>:/signing-jobs/3a0b44b4-714a-4296-8d94-dcc0a7bdfeb2
        │   ├── com.amazonaws.signer.signingProfileVersion: arn:aws:signer:<AWS_REGION>:<AWS_ACCOUNT_ID>:/signing-profiles/notation_test/vjPSTMwGW3
        │   ├── io.cncf.notary.verificationPlugin: com.amazonaws.signer.notation.plugin
        │   ├── signingScheme: notary.x509.signingAuthority
        │   └── signingTime: Mon May 22 19:24:19 2023
        ├── user defined attributes
        │   └── (empty)
        ├── unsigned attributes
        │   └── (empty)
        ├── certificates
        │   ├── SHA256 fingerprint: 581899293591f48b4fd82b6f636431a84784d79b10eeff340f44f887d328acc5
        │   │   ├── issued to: CN=AWS Signer,OU=AWS Cryptography,O=AWS,L=Seattle,ST=WA,C=US
        │   │   ├── issued by: CN=AWS Signer <AWS_REGION> Code Signing CA G1,OU=Cryptography,O=AWS,ST=WA,C=US
        │   │   └── expiry: Thu May 25 16:31:29 2023
        │   ├── SHA256 fingerprint: f0e6d676ae9ff152451f149c737a31f02ddcb093a1e3a5afefa6e931a7a59473
        │   │   ├── issued to: CN=AWS Signer <AWS_REGION> Code Signing CA G1,OU=Cryptography,O=AWS,ST=WA,C=US
        │   │   ├── issued by: CN=AWS Signer Code Signing Int CA G1,OU=Cryptography,O=AWS,ST=WA,C=US
        │   │   └── expiry: Sun Jan  7 03:23:48 2024
        │   ├── SHA256 fingerprint: eaaac975dcc0d5d160fca1e39834834f014a238cd224d053670982388ccbfca1
        │   │   ├── issued to: CN=AWS Signer Code Signing Int CA G1,OU=Cryptography,O=AWS,ST=WA,C=US
        │   │   ├── issued by: CN=AWS Signer Code Signing Root CA G1,OU=Cryptography,O=AWS,ST=WA,C=US
        │   │   └── expiry: Thu Oct 28 23:18:32 2027
        │   └── SHA256 fingerprint: 90a87d0543c3f094dbff9589b6649affe2f3d6e0f308799be2258461c686473f
        │       ├── issued to: CN=AWS Signer Code Signing Root CA G1,OU=Cryptography,O=AWS,ST=WA,C=US
        │       ├── issued by: CN=AWS Signer Code Signing Root CA G1,OU=Cryptography,O=AWS,ST=WA,C=US
        │       └── expiry: Tue Oct 27 22:33:22 2122
        └── signed artifact
            ├── media type: application/vnd.docker.distribution.manifest.v2+json
            ├── digest: sha256:fcaff905397ba63fd376d0c3019f1f1cb6e7506131389edbcb3d22719f1ae54d
            └── size: 527

In the preceding command output, we can see a hierarchical view of the signed artifact, as well as the signature and associated certificate chain and certificate fingerprints. This is very helpful should we need to troubleshoot and trace the provenance of signature material.

Use case – Container image verification with Notation CLI

Now that you have used the Notation CLI—with our AWS Signer signing profile—to sign a container image, you can move on to verifying the applied container image signature. Before we try image signature verification inside Kubernetes, we can verify the signature with the Notation CLI. For this process, you will need a valid Notation trustpolicy document.

As mentioned earlier in the Notation installation, the trustpolicy file appears in the notation directory tree. However, the installer creates it as an empty file, and you must create a valid trustpolicy document. The trustpolicy document used for this example is shown below.

# Notation trustpolicy document
{
  "version": "1.0",
  "trustPolicies": [
    {
      "name": "aws-signer-tp",
      "registryScopes": [
        "<AWS_ACCOUNT_ID>.dkr.ecr.<AWS_REGION>.amazonaws.com/pause"
      ],
      "signatureVerification": {
        "level": "strict"
      },
      "trustStores": [
        "signingAuthority:aws-signer-ts"
      ],
      "trustedIdentities": [
        "arn:aws:signer:<AWS_REGION>:<AWS_ACCOUNT_ID>:/signing-profiles/notation_test"
      ]
    }
  ]
}

Note:  The registryScopes list does support a single wildcard—“*”—to set all registries/repository combinations in scope.

The preceding Notation trustpolicy document configures Notation to verify container image signatures using the AWS Signer signing profile you used to sign the container image. As you can see, the trustpolicy configures the following items:

  • Registry scope
  • Signature verification – configures AWS Signer signing profile revocation checks
  • Truststores used for verification
  • Trusted identities – AWS Signer signing profiles

The easiest way to configure the above trustpolicy is to use the notation policy import command and import a known-good JSON document.

# Import a known-good Notation trustpolicy JSON document
$ notation policy import trustpolicy.json

Trust policy configuration imported successfully.

You can see the imported and configured trustpolicy with the notation policy show command.

# Show the current Notation trustpolicy document
$ notation policy show
{
  "version": "1.0",
  "trustPolicies": [
    {
      "name": "aws-signer-tp",
      "registryScopes": [
        "<AWS_ACCOUNT_ID>.dkr.ecr.<AWS_REGION>.amazonaws.com/pause"
      ],
      "signatureVerification": {
        "level": "strict"
      },
      "trustStores": [
        "signingAuthority:aws-signer-ts"
      ],
      "trustedIdentities": [
        "arn:aws:signer:<AWS_REGION>:<AWS_ACCOUNT_ID>:/signing-profiles/notation_test"
      ]
    }
  ]
}

With the Notation trustpolicy configured, you can now use Notation to verify the container image signature, as seen below.

# Verify the signed images with the currently configured Notation CLI
$ notation verify \
<AWS_ACCOUNT_ID>.dkr.ecr.<AWS_REGION>.amazonaws.com/pause\
@sha256:fcaff905397ba63fd376d0c3019f1f1cb6e7506131389edbcb3d22719f1ae54d
Successfully verified signature for <AWS_ACCOUNT_ID>.dkr.ecr.<AWS_REGION>.amazonaws.com/pause@sha256:fcaff905397ba63fd376d0c3019f1f1cb6e7506131389edbcb3d22719f1ae54d

Use case – Container image verification in Kubernetes with Kyverno

Now that you have signed a container image and verified its signature with the Notation CLI and AWS Signer, let's move on to container image validation in Kubernetes. In our example we will use Amazon EKS with Kubernetes Dynamic Admission Controllers and the Kyverno policy engine. The Kyverno-Notation-AWS Signer solution is found in the kyverno-notation-aws OSS project.

To get started, you must first have an Amazon EKS cluster in which to run Kyverno and test our solution. A cluster can be provisioned via the Amazon EKS cluster creation instructions. Once the cluster is created, you can get started with the Kyverno-Notation-AWS Signer solution by following the install instructions. The first steps outlined in those instructions are general to all clusters and are summarized below, with a command sketch after the list:

  • Install cert-manager
  • Install the Kyverno policy engine
  • Install the kyverno-notation-aws application
  • Apply the Kubernetes custom resource definitions for the Notation TrustPolicy and TrustStore resources
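The following is a hedged sketch of these steps. The cert-manager version is an assumption, and the kyverno-notation-aws install manifest should be taken from the project's install instructions.

# Install cert-manager (version is an assumption)
$ kubectl apply -f https://github.com/cert-manager/cert-manager/releases/download/v1.12.0/cert-manager.yaml

# Install the Kyverno policy engine with Helm
$ helm repo add kyverno https://kyverno.github.io/kyverno/
$ helm repo update
$ helm install kyverno kyverno/kyverno -n kyverno --create-namespace

# Install the kyverno-notation-aws application and the Notation TrustPolicy and
# TrustStore custom resource definitions, using the manifest(s) from the
# kyverno-notation-aws project's install instructions
$ kubectl apply -f install.yaml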

Configure Kyverno

With the preceding steps complete, you can finish the last-mile configuration so that the Kyverno solution uses the Notation and AWS Signer setup we used earlier to sign container images and verify their signatures. First, apply the correct TrustStore resource, seen below, for the Kyverno solution.

apiVersion: notation.nirmata.io/v1alpha1
kind: TrustStore
metadata:
  name: aws-signer-ts
spec:
  trustStoreName: aws-signer-ts
  type: signingAuthority
  caBundle: |-
    -----BEGIN CERTIFICATE-----
    MIICWTCCAd6g...
    -----END CERTIFICATE-----

The caBundle element in the preceding TrustStore resource is the AWS Signer root certificate configured when we installed Notation and AWS Signer.
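On macOS, for example, you can copy the root certificate from the Notation truststore that the installer created, using the path shown in the earlier tree output.

# Print the AWS Signer root certificate installed with the Notation plugin
$ cat "/Users/<USERNAME>/Library/Application Support/notation/truststore/x509/signingAuthority/aws-signer-ts/aws-signer-notation-root.crt"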

Next you need to apply the correct TrustPolicy resource that tells the Notation Golang libraries (used in the Kyverno solution) to use the correct AWS Signer profile and TrustStore. This should be familiar, as we performed a similar setup with the Notation CLI.

apiVersion: notation.nirmata.io/v1alpha1
kind: TrustPolicy
metadata:
  name: trustpolicy-sample
spec:
  version: '1.0'
  trustPolicies:
  - name: aws-signer-tp
    registryScopes:
    - "*"
    signatureVerification:
      level: strict
      override: {}
    trustStores:
    - signingAuthority:aws-signer-ts
    trustedIdentities:
    - "arn:aws:signer:<AWS_REGION>:<AWS_ACCOUNT_ID>:/signing-profiles/notation_test"

Next you need to update and apply our check-images Kyverno cluster policy to use the TLS certificate chain from the kyverno-notation-aws-tls secret, which you can retrieve with the following command. This allows the cluster policy to call kyverno-notation-aws, a Kubernetes service external to the Kyverno policy engine.

# Get TLS secrets that are used by Kyverno cluster policy to call the kyverno-notation-aws service
$ kubectl -n kyverno-notation-aws get secret kyverno-notation-aws-tls -o json | jq -r '.data."tls.crt"' | base64 -d && kubectl -n kyverno-notation-aws get secret kyverno-notation-aws-tls -o json | jq -r '.data."ca.crt"' | base64 -d

The updated Kyverno policy is seen below.

apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: check-images     
spec:
  validationFailureAction: Enforce
  webhookTimeoutSeconds: 30
  rules:
  - name: call-aws-signer-extension
    match:
      any:
      - resources:
          namespaces:
          - test-notation
          kinds:
          - Pod
    context:
    - name: result
      apiCall:
        method: POST
        data:
        - key: images
          value: "{{ request.object.spec.[ephemeralContainers, initContainers, containers][].image }}"
        service:
          url: https://svc.kyverno-notation-aws/checkimages
          caBundle: |-
            -----BEGIN CERTIFICATE-----
            MIICiTCCAjCg...
            -----END CERTIFICATE-----
            -----BEGIN CERTIFICATE-----
            MIIBdzCCAR2g...
            -----END CERTIFICATE-----
    validate:
      message: "not allowed"
      deny:
        conditions:
          all:
          - key: "{{ result.verified }}"
            operator: EQUALS
            value: false

With our TrustPolicy and TrustStore resources, and the check-images cluster policy applied, you will configure the kyverno-notation-aws service account in the kyverno-notation-aws namespace to use IAM Roles for Service Accounts (IRSA). Using IRSA supplies IAM credentials, based on an IAM role principal and policies, to pods using the service account. These credentials are used for the following access, with a sketch of a matching AWS Signer IAM policy after the list:

  • Get region-specific Amazon ECR credentials for pulling container image signatures
  • Access AWS Signer APIs needed for the container image signature verification process
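A minimal sketch of a customer-managed policy, like the notary-admission-signer policy referenced below, is shown here. The exact set of actions is an assumption; signer:GetRevocationStatus is the AWS Signer API used for revocation checks during verification.

# Example IAM policy document for AWS Signer verification access (assumed actions)
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowSignerRevocationChecks",
            "Effect": "Allow",
            "Action": "signer:GetRevocationStatus",
            "Resource": "*"
        }
    ]
}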

The kyverno-notation-aws service account is already installed with the kyverno-notation-aws application, and you need to override the service account to use our desired IRSA configuration. The easiest way to accomplish this override is to use the following eksctl command.

# Create/Update IRSA
NAME=kyverno-notation-aws
NAMESPACE=kyverno-notation-aws
CLUSTER=kyverno-notary-uw2
ECR_POLICY=arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryReadOnly
SIGNER_POLICY=arn:aws:iam::<AWS_ACCOUNT_ID>:policy/notary-admission-signer

eksctl create iamserviceaccount \
  --name $NAME \
  --namespace $NAMESPACE \
  --cluster $CLUSTER \
  --attach-policy-arn $ECR_POLICY \
  --attach-policy-arn $SIGNER_POLICY \
  --approve \
  --override-existing-serviceaccounts

Note:  The eksctl commands do not work with self-managed Kubernetes.

Once the kyverno-notation-aws service account is updated, you will delete the current kyverno-notation-aws pods; the new pods will pick up the AWS credentials from the newly configured kyverno-notation-aws service account and allow Kyverno to communicate with our Amazon ECR registries and the AWS Signer API.

# Delete the current pods that are using non-IRSA credentials
$ kubectl -n kyverno-notation-aws delete po kyverno-notation-aws-6545d654cd-6qfzj
pod "kyverno-notation-aws-6545d654cd-6qfzj" deleted

# Verify pods come back up and are ready
$ kubectl -n kyverno-notation-aws get po -w
NAME                                    READY   STATUS              RESTARTS   AGE
kyverno-notation-aws-6545d654cd-wjtpd   0/2     ContainerCreating   0          5s
kyverno-notation-aws-6545d654cd-wjtpd   2/2     Running             0          12s

Note:  As an alternative to overriding the existing kyverno-notation-aws service account, and having to delete and restart the existing kyverno-notation-aws pods, you could manually create the AWS IAM role and add the eks.amazonaws.com/role-arn annotation to the existing service account resource in the kyverno-notation-aws install.yaml file.
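For reference, a minimal sketch of a manually annotated service account looks like the following; the IAM role name is a placeholder.

apiVersion: v1
kind: ServiceAccount
metadata:
  name: kyverno-notation-aws
  namespace: kyverno-notation-aws
  annotations:
    eks.amazonaws.com/role-arn: arn:aws:iam::<AWS_ACCOUNT_ID>:role/<IRSA_ROLE_NAME>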

Once the new kyverno-notation-aws pods are running with the new IRSA credentials, we can test our solution similarly to our earlier Notation CLI tests. You can apply known-good (valid signatures) and known-bad (invalid signatures) pods and deployments to verify image signatures.
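A minimal sketch of such a known-good pod, referencing the signed image digest from earlier, might look like the following; the manifest names mirror the test output below and are assumptions about the test files.

apiVersion: v1
kind: Pod
metadata:
  name: notary-admit
  namespace: test-notation
spec:
  containers:
  - name: pause
    image: <AWS_ACCOUNT_ID>.dkr.ecr.<AWS_REGION>.amazonaws.com/pause@sha256:fcaff905397ba63fd376d0c3019f1f1cb6e7506131389edbcb3d22719f1ae54d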

# Apply the test resources to test the container image signature validation policies
$ kubectl apply -f .
namespace/test-notation created
pod/notary-admit created
deployment.apps/test created
Error from server: error when creating "3-test-pod-bad.yaml": admission webhook "validate.kyverno.svc-fail" denied the request:

resource Pod/test-notation/notary-admit-bad was blocked due to the following policies

check-images:
  call-aws-signer-extension: |
    failed to check deny preconditions: failed to substitute variables in condition key: failed to resolve result.verified at path : failed to execute APICall: HTTP 500 Internal Server Error: failed to verify image <AWS_ACCOUNT_ID>.dkr.ecr.<AWS_REGION>.amazonaws.com/pause:3.9: no signature is associated with "<AWS_ACCOUNT_ID>.dkr.ecr.<AWS_REGION>.amazonaws.com/pause@sha256:e58e7b93b7b41119528b189803971223eeccece0df6e2af3c2df9c81978c58cc", make sure the artifact was signed successfully

Error from server: error when creating "5-test-deploy-bad.yaml": admission webhook "validate.kyverno.svc-fail" denied the request:

resource Deployment/test-notation/test-bad was blocked due to the following policies

check-images:
  autogen-call-aws-signer-extension: |
    failed to check deny preconditions: failed to substitute variables in condition key: failed to resolve result.verified at path : failed to execute APICall: HTTP 500 Internal Server Error: failed to verify image <AWS_ACCOUNT_ID>.dkr.ecr.<AWS_REGION>.amazonaws.com/pause:3.9: no signature is associated with "<AWS_ACCOUNT_ID>.dkr.ecr.<AWS_REGION>.amazonaws.com/pause@sha256:e58e7b93b7b41119528b189803971223eeccece0df6e2af3c2df9c81978c58cc", make sure the artifact was signed successfully

As expected, our known good pods and deployments passed container image signature verification, while our known bad pods and deployments did not.

Note:  Amazon ECR supports multiple signatures for container images. During verification, if the Notation and AWS Signer solution finds multiple signatures, it will verify the image if any of the signatures passes validation with any of the signing profiles listed in the configured TrustPolicy. For example, if you have 10 profiles listed in the TrustPolicy, and multiple signatures, then as long as one signature that passes validation checks is from one of those 10 profiles, the image verification will succeed.

Kyverno Auto-Gen Rules for Pod Controllers

In our check-images cluster policy, we only specified that pod resources would be validated. However, during testing we validated Deployment resources as well. How is that possible? Kyverno includes an Auto-Gen feature that creates policy rules for pod controllers, based on the supplied pod policy. After Kyverno modifies the existing policy with new rules, the newly auto-generated rules are found in the status element of the original cluster policy.

# The cluster policy was modified to include pod-controller rules
$ kubectl get cpol check-images -o=jsonpath='{.status.autogen.rules[0].match}'|jq .
{
  "any": [
    {
      "resources": {
        "kinds": [
          "DaemonSet",
          "Deployment",
          "Job",
          "StatefulSet",
          "ReplicaSet",
          "ReplicationController"
        ],
        "namespaces": [
          "test-notation"
        ]
      }
    }
  ],
  "resources": {}
}

As you can see, the check-images cluster policy has been updated with new rules through the Kyverno auto-gen feature to include controllers that create pods, which precludes the need to manually create and manage all the corresponding cluster policy rules.

Cleaning up

Follow these steps to clean up the resources we provisioned when they are no longer needed; a command sketch for the first three steps follows the list.

  1. Delete the Amazon EKS cluster
  2. Delete the AWS IAM roles and policies that we used for our IAM Roles for Service Accounts configuration
  3. Revoke the AWS Signer signing profile we created and used with the AWS CLI command: aws signer revoke-signing-profile
  4. Delete signature(s) from the Amazon ECR repository (see below).
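This hedged sketch covers steps 1 through 3; the cluster name comes from our earlier eksctl example, and the profile version, timestamp, and reason are placeholders.

# 1. Delete the Amazon EKS cluster
$ eksctl delete cluster --name kyverno-notary-uw2

# 2. Delete the IRSA service account and its associated AWS IAM role
$ eksctl delete iamserviceaccount \
    --name kyverno-notation-aws \
    --namespace kyverno-notation-aws \
    --cluster kyverno-notary-uw2

# 3. Revoke the AWS Signer signing profile
$ aws signer revoke-signing-profile \
    --profile-name notation_test \
    --profile-version <PROFILE_VERSION> \
    --reason "Cleanup" \
    --effective-time <TIMESTAMP>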

As described above, Notation supports OCI 1.0 currently and uses an OCI Image Index to track its signatures in the registry. Amazon ECR does not allow deleting artifacts or images referred to by an OCI Image Index, so just deleting the signatures in the console will not be effective.

You can use the ORAS project’s oras client to delete signatures and other reference type artifacts. It has been implemented to first remove the reference from an index, and then delete the manifest. The oras manifest delete command can be used, referencing the index of the signature artifact. For example:

# Use oras CLI, with Amazon ECR Docker Credential Helper, to delete signature
$ oras manifest delete <AWS_ACCOUNT_ID>.dkr.ecr.<AWS_REGION>.amazonaws.com/pause@sha256:ca78e5f730f9a789ef8c63bb55275ac12dfb9e8099e6a0a64375d8a95ed501c4

As described above, once the OCI 1.1 Distribution specification is released, clients will no longer have to manage artifact references in OCI registries. Using the ORAS client as shown previously is only required when using OCI 1.0.

Note:  It is always a good security practice to remove outdated, unused, or invalid security resources or configurations.

Conclusion

In this post, we showed you the details of AWS Signer Container Image Signing, a new capability for signing and verifying your container images with a fully managed solution. Using the open source Notation client with the curated AWS Signer plugin, you can adopt a simple client-based workflow for signing and verifying your container images. We’ll continue to enhance and expand integrations for this feature, and we welcome your feedback. Please visit our Containers Roadmap on GitHub to check on progress, and please open an issue to tell us about changes you’d like us to work on next.