Containers
Authenticating with Docker Hub for AWS Container Services
Docker Hub has recently updated its terms of service to introduce rate limits for container image pulls. While these limits don’t apply to accounts under a Pro or Team plan, anonymous users are limited to 100 pulls per 6 hours per IP address, and authenticated free accounts are limited to 200 pulls per 6 hours. In this post, you will learn how to authenticate with Docker Hub to pull images from private repositories using both Amazon ECS and Amazon EKS to avoid operational disruptions as a result of the newly imposed limits and control access to your private container images. If you are not already using Docker Hub, you may consider Amazon Elastic Container Registry (Amazon ECR) as a fully managed alternative with native integrations to your AWS Cloud environment.
Docker Hub authentication with Amazon ECS
Amazon Elastic Container Service (Amazon ECS) is a fully managed container orchestration service that enables you to specify the container images you want to run as part of your application in a resource called a task definition. You can store your Docker Hub username and password as a secret in AWS Secrets Manager, and leverage integration with AWS Key Management Service (AWS KMS) to encrypt that secret with a unique data key that is protected by an AWS KMS customer master key (CMK). You can then reference the secret in your task definition and assign the appropriate permission to retrieve and decrypt the secret by creating a task execution role in AWS Identity and Access Management (IAM).
Solution overview:
The diagram below is a high-level illustration of the solution covered in this post to authenticate with Docker Hub using Amazon ECS.
By following the steps in this section of the post, you will create:
- A customer master key and an alias in AWS KMS to encrypt your secret
- A secret in AWS Secrets Manager to store your Docker Hub username and password
- An ECS task execution role to give your task permission to decrypt and retrieve your secret
- An ECS cluster and VPC resources using the Amazon ECS CLI
- An Amazon ECS service running one instance of a task on your cluster using the AWS Fargate launch type
Prerequisites:
For this solution, you should have the following prerequisites:
- An AWS account
- The AWS CLI
- The Amazon ECS CLI
- A Docker Hub account with a private repository
Push an image to a private Docker Hub repository (optional):
If you want to follow the specific configurations of this post, you can pull the official Docker build for NGINX, tag the image with the name of your private repository, and push it to your Docker Hub account. Replace the <USER_NAME> variable with your Docker Hub username, the <REPO_NAME> variable with the name of your private repository, and the <TAG_NAME> variable with the tag you want to use.
docker pull nginx
docker tag nginx:latest <USER_NAME>/<REPO_NAME>:<TAG_NAME>
docker push <USER_NAME>/<REPO_NAME>:<TAG_NAME>
Otherwise, feel free to use the Docker image of your choice, but note that you may need to make some minor changes to the commands and configurations used in this post.
Create an AWS KMS CMK and Alias:
Start by creating a customer master key (CMK) and an alias in AWS KMS using the AWS CLI. This CMK will be leveraged by AWS Secrets Manager to perform envelope encryption on the unique data key it uses to encrypt your individual secrets. An alias acts as a display name for your CMK and is easier to remember than the key ID. An alias can also help simplify your applications. For example, if you use an alias in your code, you can change the underlying CMK that your code uses by associating the given alias with a different CMK.
aws kms create-key --query KeyMetadata.Arn --output text
The Amazon Resource Name (ARN) of the newly created key should be displayed as the output of the previous command. Replace the <CMK_ARN> variable with that ARN and the <CMK_ALIAS> variable with the alias you wish to use:
aws kms create-alias --alias-name alias/<CMK_ALIAS> --target-key-id <CMK_ARN>
You will also need the ARN of the CMK when creating a permission policy document in an upcoming step.
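If you need to look up the ARN again later, you can query it from the alias; a minimal sketch using the AWS CLI:
# Resolve the alias to the key ARN
aws kms describe-key \
--key-id alias/<CMK_ALIAS> \
--query KeyMetadata.Arn \
--output text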
Create a secret in AWS Secrets Manager:
At this point you can proceed to create a secret in AWS Secrets Manager to securely store your Docker Hub username and password. Replace the <USER_NAME> variable with your Docker Hub username, the <PASSWORD> variable with your Docker Hub password, and the <CMK_ALIAS> variable with the alias of your CMK from the previous step. We also recommend naming secrets in a hierarchical manner to make them easier to manage. Note that the secret name in the following command is prepended with a dev/ prefix; this stores your secret in a virtual dev folder:
aws secretsmanager create-secret \
--name dev/DockerHubSecret \
--description "Docker Hub Secret" \
--kms-key-id alias/<CMK_ALIAS> \
--secret-string '{"username":"<USER_NAME>","password":"<PASSWORD>"}'
The ARN of the secret should be displayed as the output of the previous command. You will need to reference this ARN when creating a permission policy document in an upcoming step.
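If you need to retrieve the secret's ARN again later, you can describe the secret by name; a minimal sketch, assuming the dev/DockerHubSecret name used above:
# Look up the ARN of the secret by its name
aws secretsmanager describe-secret \
--secret-id dev/DockerHubSecret \
--query ARN \
--output text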
Create a task execution role in IAM:
First you will need to create a trust policy document to specify the principal that can assume the role, which in this case is an ECS task:
cat << EOF > ecs-trust-policy.json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Service": "ecs-tasks.amazonaws.com"
      },
      "Action": "sts:AssumeRole"
    }
  ]
}
EOF
Next, create a permission policy document that allows the ECS task to decrypt and retrieve the secret created in AWS Secrets Manager. Replace the <SECRET_ARN> and <CMK_ARN> variables with the ARNs of the secret and CMK created in previous steps:
cat << EOF > ecs-secret-permission.json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "kms:Decrypt",
        "secretsmanager:GetSecretValue"
      ],
      "Resource": [
        "<SECRET_ARN>",
        "<CMK_ARN>"
      ]
    }
  ]
}
EOF
You can now create the ECS task execution role using the AWS CLI. Note that you are referencing the trust policy document created in a previous step. Modify the directory path as needed to properly locate the file:
aws iam create-role \
--role-name ecsTaskExecutionRole \
--assume-role-policy-document file://ecs-trust-policy.json
To add foundational permissions to other AWS service resources that are required to run Amazon ECS tasks, attach the AWS managed ECS task execution role policy to the newly created role:
aws iam attach-role-policy \
--role-name ecsTaskExecutionRole \
--policy-arn arn:aws:iam::aws:policy/service-role/AmazonECSTaskExecutionRolePolicy
Finally, add an inline permission policy allowing your task to retrieve your Docker Hub username and password from AWS Secrets Manager. Note that you are referencing the permission policy document created in a previous step. Modify the directory path as needed to properly locate the file:
aws iam put-role-policy \
--role-name ecsTaskExecutionRole \
--policy-name ECS-SecretsManager-Permission \
--policy-document file://ecs-secret-permission.json
Configure the ECS CLI (optional):
The Amazon ECS Command Line Interface (ECS CLI) provides high-level commands that simplify creating an Amazon ECS cluster and the AWS resources required to set it up. After installing the ECS CLI, you can optionally configure your AWS credentials in a named ECS profile using the ecs-cli configure profile command. Profiles are stored in the ~/.ecs/credentials file.
ecs-cli configure profile \
--access-key <AWS_ACCESS_KEY_ID> \
--secret-key <AWS_SECRET_ACCESS_KEY> \
--profile-name <PROFILE_NAME>
You can also specify which profile to use by default with the ecs-cli configure profile default command. If you don’t configure an ECS profile or set environment variables, the default AWS profile stored in the ~/.aws/credentials file will be used.
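For example, to mark the profile you created above as the default profile, you could run something like the following (a minimal sketch, reusing <PROFILE_NAME> from the previous command):
ecs-cli configure profile default \
--profile-name <PROFILE_NAME>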
You can additionally configure the ECS cluster name, the default launch type, and the AWS Region to use with the ECS CLI using the ecs-cli configure command. The <LAUNCH_TYPE> variable can be set to either FARGATE or EC2.
ecs-cli configure \
--cluster <CLUSTER_NAME> \
--default-launch-type <LAUNCH_TYPE> \
--config-name <CONFIG_NAME> \
--region <AWS_REGION>
These values can also be defined or overridden using the command flags specified in the following steps.
Create an Amazon ECS cluster:
Create an Amazon ECS cluster using the ecs-cli up command, specifying the cluster name you wish to use, the AWS Region to use (us-east-1, for example), and FARGATE as the launch type:
ecs-cli up \
--cluster <CLUSTER_NAME> \
--region us-east-1 \
--launch-type FARGATE
By using the FARGATE launch type, you are enlisting AWS Fargate to manage compute resources on your behalf so that you don’t need to provision your own EC2 container instances. By default, the ECS CLI will also launch an AWS CloudFormation stack to create a new VPC with an attached Internet Gateway, 2 public subnets, and a security group. You can also provide your own resources using flag options with the above command.
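For example, if you already have a VPC and subnets you want to reuse, the ecs-cli up command accepts flags for existing resources; a hedged sketch, where <VPC_ID>, <SUB_1_ID>, and <SUB_2_ID> stand in for your own resource IDs:
# Reuse an existing VPC and subnets instead of creating new ones
ecs-cli up \
--cluster <CLUSTER_NAME> \
--region us-east-1 \
--launch-type FARGATE \
--vpc <VPC_ID> \
--subnets <SUB_1_ID>,<SUB_2_ID>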
Configure the Security Group:
Once the ECS cluster has been successfully created, you should see the VPC and subnet IDs displayed in the terminal. Next, retrieve a JSON description of the newly created security group and make note of the security group ID, or GroupId. Replace the <VPC_ID> variable with the ID of the newly created VPC.
aws ec2 describe-security-groups \
--filters Name=vpc-id,Values=<VPC_ID> \
--region us-east-1
Add an inbound rule to the security group allowing HTTP traffic from any IPv4 address. Replace the <SG_ID> variable with the GroupId retrieved in the previous step. This inbound rule will enable you to validate that the NGINX server is running in your task and that the private image has been successfully pulled from Docker Hub.
aws ec2 authorize-security-group-ingress \
--group-id <SG_ID> \
--protocol tcp \
--port 80 \
--cidr 0.0.0.0/0 \
--region us-east-1
Create an Amazon ECS service:
An Amazon ECS service enables you to run and maintain multiple instances of a task definition simultaneously. The ECS CLI allows you to create a service using a Docker Compose file. Create the following docker-compose.yml file, which defines a web container that exposes port 80 for inbound traffic to the web server. To reference the NGINX image previously pushed to your private Docker Hub repository, replace the <USER_NAME> variable with your Docker Hub username, the <REPO_NAME> variable with the name of your private repository, and the <TAG_NAME> variable with the tag you used.
cat << EOF > docker-compose.yml
version: "3"
services:
  web:
    image: <USER_NAME>/<REPO_NAME>:<TAG_NAME>
    ports:
      - 80:80
EOF
You will also need to create the following ecs-params.yml file to specify additional parameters for your service that are specific to Amazon ECS. Note that the services field below corresponds to the services field in the Docker Compose file above, matching the name of the container to run. When the ECS CLI creates a task definition from the Compose file, the fields of the web service will be merged into the ECS container definition, including the container image it will use and the Docker Hub repository credentials needed to access it. Replace the <SECRET_ARN> variable with the ARN of the AWS Secrets Manager secret you created earlier. Replace the <SUB_1_ID>, <SUB_2_ID>, and <SG_ID> variables with the IDs of the 2 public subnets and the security group that were created with the ECS cluster.
cat << EOF > ecs-params.yml
version: 1
task_definition:
  task_execution_role: ecsTaskExecutionRole
  ecs_network_mode: awsvpc
  task_size:
    mem_limit: 0.5GB
    cpu_limit: 256
  services:
    web:
      repository_credentials:
        credentials_parameter: "<SECRET_ARN>"
run_params:
  network_configuration:
    awsvpc_configuration:
      subnets:
        - "<SUB_1_ID>"
        - "<SUB_2_ID>"
      security_groups:
        - "<SG_ID>"
      assign_public_ip: ENABLED
EOF
Next, create the ECS service from your Compose file using the ecs-cli compose service up command. This command will look for your docker-compose.yml and ecs-params.yml files in the current directory. Replace the <CLUSTER_NAME> variable with the name of your ECS cluster and the <PROJECT_NAME> variable with the desired name of your ECS service.
ecs-cli compose \
--project-name <PROJECT_NAME> \
--cluster <CLUSTER_NAME> \
service up \
--launch-type FARGATE
You can now view the web container that is running in the service with the ecs-cli compose service ps command.
ecs-cli compose \
--project-name <PROJECT_NAME> \
--cluster <CLUSTER_NAME> \
service ps
By navigating to the IP address listed on port 80, you should be able to view the default NGINX welcome page, validating that your task was able to successfully pull the container image from your private Docker Hub repository using your credentials for authentication.
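You can also check from the command line; a minimal sketch, where <TASK_PUBLIC_IP> is a placeholder for the address shown in the Ports column of the previous command's output:
# An HTTP 200 response indicates NGINX is serving traffic
curl -I http://<TASK_PUBLIC_IP>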
Cleanup:
Update the desired count of the service to 0 and then delete the service.
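The ecs-cli compose service scale command sets the desired count; a minimal sketch, reusing the project and cluster names from earlier:
ecs-cli compose \
--project-name <PROJECT_NAME> \
--cluster <CLUSTER_NAME> \
service scale 0
Then remove the service itself with the ecs-cli compose service down command: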
ecs-cli compose \
--project-name <PROJECT_NAME> \
--cluster <CLUSTER_NAME> \
service down
Delete the AWS CloudFormation stack that was created by ecs-cli up and the associated resources using the ecs-cli down command:
ecs-cli down --cluster <CLUSTER_NAME>
Docker Hub authentication with Amazon EKS
Amazon Elastic Kubernetes Service (Amazon EKS) is a managed service that enables you to run Kubernetes on AWS without needing to install, operate, and maintain your own Kubernetes control plane or nodes. Kubernetes is an open-source system for automating the deployment, scaling, and management of containerized applications.
You can store your Docker Hub username and password as a Kubernetes secret stored in etcd, the highly available key value store used for all cluster data, and leverage integration with AWS Key Management Service (AWS KMS) to perform envelope encryption on that Secret with your own Customer Master Key (CMK). When Secrets are stored using the Kubernetes Secrets API, they are encrypted with a Kubernetes-generated data encryption key (DEK), which is then further encrypted using the CMK. You can then create a service account that references the secret and associate that service account with the pods you launch as part of a deployment, enabling the kubelet node agent to pull the private image from Docker Hub on behalf of the pods.
Solution overview:
The diagram below is a high-level illustration of the solution covered in this post to authenticate with Docker Hub using Amazon EKS.
By following the steps in this section of the post, you will create:
- An Amazon EKS cluster with a managed node group of worker nodes
- A Docker Registry secret that is encrypted and stored in etcd
- A service account that serves as an identity for processes running in your pods and references the ImagePullSecret
- A deployment that declaratively specifies a ReplicaSet of pods to which the service account is associated.
- A LoadBalancer service that exposes the underlying pods behind the DNS endpoint of an Elastic Load Balancer
Prerequisites:
In addition to the prerequisites outlined in the previous section, you will also need:
- The eksctl command line interface tool for creating your EKS cluster
- The kubectl command line interface tool for creating and managing Kubernetes objects within your EKS cluster
For the purposes of this solution, you can continue to use the official Docker build for NGINX that was pushed to your private repository in the previous section. Otherwise, feel free to use the Docker image of your choice, but be aware that you may need to make some minor changes to the commands and configurations used in this post.
You will also need a customer master key (CMK) with an associated alias in AWS KMS to perform envelope encryption on your Kubernetes secret. You can continue to use the CMK created in the previous section or create a new one.
Create an Amazon EKS cluster:
To get started, create a configuration file to use with eksctl, the official CLI for Amazon EKS. This configuration file specifies details about the Kubernetes cluster you want to create in Amazon EKS, as distinct from the default parameters that eksctl will use otherwise.
Note that, in addition to specifying the cluster name and region (us-east-1), the file also specifies a managed node group, which automates the provisioning and lifecycle management of the Amazon EC2 instances that will act as your cluster’s worker nodes. These managed nodes will be provisioned as part of an Amazon EC2 Auto Scaling group that is managed for you by Amazon EKS.
The ARN of the CMK you created in AWS KMS is also referenced and will be used to encrypt the data encryption keys (DEK) generated by the Kubernetes API server in the EKS control plane.
cat << EOF > eks-dev-cluster.yaml
---
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: eks-dev
  region: us-east-1
managedNodeGroups:
  - name: eks-dev-nodegroup
    desiredCapacity: 2
# KMS CMK for the EKS cluster to use when encrypting your Kubernetes secrets
secretsEncryption:
  keyARN: <CMK_ARN>
EOF
You can retrieve the ARN of the CMK (<CMK_ARN>) by specifying the <CMK_ALIAS> in the following command:
aws kms describe-key --key-id alias/<CMK_ALIAS> | grep Arn
Next, use the eksctl create cluster command to initiate the creation of your Kubernetes cluster in Amazon EKS according to the specifications in the configuration file:
eksctl create cluster -f eks-dev-cluster.yaml
This command will launch an AWS CloudFormation stack under the hood to create a fully managed EKS control plane, a dedicated VPC, and two Amazon EC2 worker nodes using the official Amazon EKS AMI.
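Once the cluster is ready, you can confirm that the two worker nodes have joined it; a minimal check with kubectl:
kubectl get nodes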
Create a new namespace:
It’s generally considered best practice to deploy your applications into namespaces other than kube-system or default to better manage the interaction between your pods, so create a dev namespace in your cluster using the Kubernetes command-line tool, kubectl.
kubectl create ns dev
Create a Docker Registry secret:
Now, create a Docker Registry secret, replacing the <USER_NAME>, <PASSWORD>, and <EMAIL> variables with your Docker Hub credentials.
kubectl create secret docker-registry docker-secret \
--docker-server="https://index.docker.io/v1/" \
--docker-username="<USER_NAME>" \
--docker-password="<PASSWORD>" \
--docker-email="<EMAIL>" \
--namespace="dev"
When you create this secret, the Kubernetes API server in the EKS control plane generates a data encryption key (DEK) locally and uses it to encrypt the plaintext payload of the secret. The API server then calls AWS KMS to encrypt the DEK with the CMK referenced in your cluster configuration file above, and stores the encrypted secret, along with the encrypted DEK, in etcd. When a pod needs the secret, the API server reads the encrypted secret from etcd, has AWS KMS decrypt the DEK, and uses the plaintext DEK to decrypt the secret.
Use the following command to verify that your secret was created.
kubectl get secrets docker-secret --namespace=dev
Create a service account:
Next, create a service account in the same dev namespace to provide an identity for processes that will run in your pods.
kubectl apply -f - <<EOF
apiVersion: v1
kind: ServiceAccount
metadata:
  name: dev-sa
  namespace: dev
imagePullSecrets:
  - name: docker-secret
EOF
The imagePullSecrets field is used to pass the Docker Registry secret to the kubelet node agent, which uses this information to pull the private image from Docker Hub on behalf of your pod.
Verify the creation of the service account using the following command.
kubectl get sa dev-sa --namespace=dev
Create a deployment:
Now, create a configuration file that specifies the details of a deployment, which will create two replicated pods, each running a container built from the NGINX image stored in your private Docker Hub repository. Note that the service account created above is also referenced as part of the pod template specification. For the container image, replace the <USER_NAME> variable with your Docker Hub username, the <REPO_NAME> variable with the name of your private repository, and the <TAG_NAME> variable with the tag you used. The image pull policy is set to Always in order to force the kubelet to pull the image from Docker Hub each time it launches a new container rather than using a locally cached copy, requiring authentication with the Docker Registry secret created earlier.
cat <<EOF > nginx-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  namespace: dev
  labels:
    app: nginx
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      serviceAccountName: dev-sa
      containers:
      - name: nginx
        image: <USER_NAME>/<REPO_NAME>:<TAG_NAME>
        imagePullPolicy: Always
        ports:
        - containerPort: 80
EOF
Apply the configuration file and create the deployment in your EKS cluster with the following command.
kubectl apply -f nginx-deployment.yaml
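You can confirm that the pods are running and that the image pull from your private repository succeeded; a minimal check:
kubectl get pods --namespace=dev
Each pod should report a STATUS of Running once its image pull completes.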
Create a LoadBalancer service:
Finally, provision an external LoadBalancer type service that exposes the pods of your deployment.
kubectl expose deployment nginx-deployment \
--namespace=dev \
--type=LoadBalancer \
--name=nginx-service
Get the DNS endpoint of the Elastic Load Balancer associated with your service.
kubectl get service/nginx-service --namespace=dev
Using your browser, navigate to the DNS endpoint specified in the EXTERNAL-IP output field. Verify that you can view the default NGINX welcome page and that the pods in your deployment were able to successfully pull the container image from your private Docker Hub repository using your credentials for authentication.
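You can also verify from the command line; a minimal sketch, where <ELB_DNS_NAME> is a placeholder for the EXTERNAL-IP value returned above:
# An HTTP 200 response indicates NGINX is serving traffic through the load balancer
curl -I http://<ELB_DNS_NAME>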
Cleanup:
Delete your service and the associated Elastic Load Balancer.
kubectl delete service nginx-service --namespace=dev
Use the eksctl delete cluster command to delete your EKS cluster.
eksctl delete cluster --name eks-dev --region us-east-1
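If you no longer need the Docker Hub secret and the CMK created for this walkthrough, you can optionally remove them as well; a hedged sketch using the AWS CLI (both deletions are scheduled with a waiting period rather than applied immediately):
# Schedule deletion of the secret with a 7-day recovery window
aws secretsmanager delete-secret \
--secret-id dev/DockerHubSecret \
--recovery-window-in-days 7
# Schedule deletion of the CMK after a 7-day waiting period
aws kms schedule-key-deletion \
--key-id <CMK_ARN> \
--pending-window-in-days 7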
Summary:
In this post, you created clusters in both Amazon ECS and Amazon EKS and configured them to pull a container image from a private Docker Hub repository. Integrations with AWS Key Management Service enable you to easily implement envelope encryption for your Docker Hub credentials. By authenticating with Docker Hub, you benefit from the higher pull limits of your account plan (or avoid the newly introduced rate limits entirely with a Pro or Team plan), and private repositories help you maintain access control standards for sensitive container images.