AWS Compute Blog
How to quickly set up an experimental environment to run containers on x86 and AWS Graviton2 based Amazon EC2 instances
This post is written by Kevin Jung, a Solutions Architect with Global Accounts at Amazon Web Services.
AWS Graviton2 processors are custom designed by AWS using 64-bit Arm Neoverse cores. AWS offers the Graviton2 processor in five new instance types – M6g, T4g, C6g, R6g, and X2gd. These instances are 20% lower cost and deliver up to 40% better price performance versus comparable x86-based instances, giving you a range of options to balance instance flexibility and cost savings.
You may already be running your workload on x86 instances and be looking to quickly experiment with running it on Arm64 Graviton2. To help with the migration process, AWS provides the ability to quickly set up an environment for building multi-architecture Docker images using AWS Cloud9 and Amazon ECR, and to test your workloads on Graviton2. With multiple-architecture (multi-arch) image support in Amazon ECR, it's now easy to build different images to support both x86 and Arm64 from the same source and refer to them all by the same abstract manifest name.
This blog post demonstrates how to quickly set up an environment and experiment running your workload on Graviton2 instances to optimize compute cost.
Solution Overview
The goal of this solution is to build an environment to create multi-arch Docker images and validate them on both x86 and Arm64 Graviton2 based instances before going to production. The following diagram illustrates the proposed solution.
The steps in this solution are as follows:
- Create an AWS Cloud9 IDE environment.
- Create a sample Node.js application and associated Dockerfile.
- Create an Amazon ECR repository.
- Create a multi-arch image builder.
- Create multi-arch images for x86 and Arm64 and push them to the Amazon ECR repository.
- Test by running containers on x86 and Arm64 instances.
Creating an AWS Cloud9 IDE environment
We use the AWS Cloud9 IDE to build a Node.js application image. It is a convenient way to get access to a full development and build environment.
- Log into the AWS Management Console through your AWS account.
- Select the AWS Region that is closest to you. We use the us-west-2 Region for this post.
- Search and select AWS Cloud9.
- Select Create environment. Name your environment mycloud9.
- Choose a small instance on the Amazon Linux 2 platform. These configuration steps are depicted in the following image.
- Review the settings and create the environment. AWS Cloud9 automatically creates and sets up a new Amazon EC2 instance in your account, and then automatically connects that new instance to the environment for you.
- When the environment is ready, customize it by closing the Welcome tab.
- Open a new terminal tab in the main work area, as shown in the following image.
- By default, your account has read and write access to the repositories in your Amazon ECR registry. However, your Cloud9 IDE requires permissions to make calls to the Amazon ECR API operations and to push images to your ECR repositories. Create an IAM role that has permission to access Amazon ECR, then attach it to your Cloud9 EC2 instance. For detailed instructions, see IAM roles for Amazon EC2.
Creating a sample Node.js application and associated Dockerfile
Now that your AWS Cloud9 IDE environment is set up, you can proceed with the next step. You create a sample “Hello World” Node.js application that self-reports the processor architecture.
- In your Cloud9 IDE environment, create a new directory and name it multiarch. Save all files to this directory that you create in this section.
- On the menu bar, choose File, New File.
- Add the following content to the new file, which describes the application and its dependencies.
{
"name": "multi-arch-app",
"version": "1.0.0",
"description": "Node.js on Docker"
}
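Before saving, you can optionally confirm that the file is valid JSON. The helper below is a hypothetical convenience that relies on Python's built-in json.tool module (Python 3 is preinstalled on Amazon Linux 2 Cloud9 instances):

```shell
# Hypothetical helper: validate a JSON file's syntax using Python's json.tool.
# Returns non-zero if the file is not well-formed JSON.
validate_json() {
  python3 -m json.tool "$1" > /dev/null 2>&1
}

# Example: validate_json package.json && echo "package.json is valid JSON"
```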
- Choose File, Save As, choose the multiarch directory, and then save the file as package.json.
- On the menu bar (at the top of the AWS Cloud9 IDE), choose Window, New Terminal.
- In the terminal window, change the directory to multiarch.
- Run npm install. It creates the package-lock.json file, which is copied to your Docker image.
npm install
- Create a new file and add the following Node.js code, which includes the ${process.arch} variable to self-report the processor architecture. Save the file as app.js.
// Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved. SPDX-License-Identifier: MIT-0
const http = require('http');
const port = 3000;

const server = http.createServer((req, res) => {
  res.statusCode = 200;
  res.setHeader('Content-Type', 'text/plain');
  res.end(`Hello World! This web app is running on ${process.arch} processor architecture`);
});

server.listen(port, () => {
  console.log(`Server running on ${process.arch} architecture.`);
});
- Create a Dockerfile in the same directory that instructs Docker how to build the Docker images.
FROM public.ecr.aws/amazonlinux/amazonlinux:2
WORKDIR /usr/src/app
COPY package*.json app.js ./
RUN curl -sL https://rpm.nodesource.com/setup_14.x | bash -
RUN yum -y install nodejs
RUN npm install
EXPOSE 3000
CMD ["node", "app.js"]
- Create .dockerignore. This prevents your local modules and debug logs from being copied onto your Docker image and possibly overwriting modules installed within your image.
node_modules
npm-debug.log
- You should now have the following 5 files created in your multiarch directory.
- .dockerignore
- app.js
- Dockerfile
- package-lock.json
- package.json
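Before building, a quick check that the build context contains everything the Dockerfile expects can save a failed build. The helper below is a hypothetical convenience, not part of the AWS tooling:

```shell
# Hypothetical helper: verify the build context contains the five
# expected files; prints any that are missing and returns non-zero.
check_context() {
  local dir="$1" missing=0
  for f in .dockerignore app.js Dockerfile package-lock.json package.json; do
    if [ ! -f "$dir/$f" ]; then
      echo "missing: $f"
      missing=1
    fi
  done
  return $missing
}

# Example: check_context ~/environment/multiarch && echo "context looks complete"
```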
Creating an Amazon ECR repository
Next, create a private Amazon ECR repository where you push and store multi-arch images. Amazon ECR supports multi-architecture images, including x86 and Arm64, which allows Docker to pull an image without needing to specify the correct architecture.
- Navigate to the Amazon ECR console.
- In the navigation pane, choose Repositories.
- On the Repositories page, choose Create repository.
- For Repository name, enter myrepo for your repository.
- Choose Create repository.
Creating a multi-arch image builder
You can use the Docker Buildx CLI plug-in, which extends the Docker command to transparently build multi-arch images, link them together with a manifest file, and push them all to an Amazon ECR repository using a single command.
There are a few ways to create multi-architecture images. I use QEMU emulation to quickly create multi-arch images.
- The Cloud9 environment has Docker installed by default, so you don't need to install it. In your Cloud9 terminal, enter the following commands to download the latest Buildx binary release.
export DOCKER_BUILDKIT=1
docker build --platform=local -o . "https://github.com/docker/buildx.git"
mkdir -p ~/.docker/cli-plugins
mv buildx ~/.docker/cli-plugins/docker-buildx
chmod a+x ~/.docker/cli-plugins/docker-buildx
- Enter the following command to configure the Buildx binary for different architectures. It installs emulators so that you can run and build containers for x86 and Arm64.
docker run --privileged --rm tonistiigi/binfmt --install all
- Check the list of build environments. If this is your first time, you should only see the default builder.
docker buildx ls
- I recommend using a new builder. Enter the following commands to create a new builder named mybuild and switch to it as the default. The --bootstrap flag ensures that the driver is running.
docker buildx create --name mybuild --use
docker buildx inspect --bootstrap
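Note that Docker platform strings differ from what `uname -m` reports on the instance itself (x86_64 vs. linux/amd64, aarch64 vs. linux/arm64). A small hypothetical helper to translate between the two can be handy when scripting builds:

```shell
# Hypothetical helper: map `uname -m` machine names to the Docker
# platform strings used by `docker buildx build --platform`.
arch_to_platform() {
  case "$1" in
    x86_64)        echo "linux/amd64" ;;
    aarch64|arm64) echo "linux/arm64" ;;
    *)             echo "unsupported architecture: $1" >&2; return 1 ;;
  esac
}

# Example: arch_to_platform "$(uname -m)"
```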
Creating multi-arch images for x86 and Arm64 and pushing them to the Amazon ECR repository
Interpreted and bytecode-compiled languages such as Node.js tend to work without code modification, unless they pull in binary extensions. To run a Node.js Docker image on both x86 and Arm64, you must build images for those two architectures. Using Docker Buildx, you can build images for both x86 and Arm64 and push those container images to Amazon ECR at the same time.
- Log in to your AWS Cloud9 terminal.
- Change to your multiarch directory.
- Enter the following commands to set your AWS Region and AWS account ID as environment variables. These refer to your numeric AWS account ID and the AWS Region where your registry endpoint is located.
AWS_ACCOUNT_ID=aws-account-id
AWS_REGION=us-west-2
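The registry hostname used in the commands that follow is derived from these two variables. A hypothetical helper that assembles the standard private ECR hostname (your endpoint may differ, for example in other AWS partitions) can help avoid typos:

```shell
# Hypothetical helper: build the private Amazon ECR registry hostname
# from the account ID and Region environment variables.
ecr_registry() {
  echo "${AWS_ACCOUNT_ID}.dkr.ecr.${AWS_REGION}.amazonaws.com"
}

# Example: docker pull "$(ecr_registry)/myrepo:latest"
```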
- Authenticate your Docker client to your Amazon ECR registry so that you can use the docker push commands to push images to the repositories. Enter the following command to retrieve an authentication token and authenticate your Docker client to your Amazon ECR registry. For more information, see Private registry authentication.
aws ecr get-login-password --region ${AWS_REGION} | docker login --username AWS --password-stdin ${AWS_ACCOUNT_ID}.dkr.ecr.${AWS_REGION}.amazonaws.com
- Validate your Docker client is authenticated to Amazon ECR successfully.
- Create your multi-arch images with the docker buildx build command. In your terminal window, enter the following command. This single command instructs Buildx to create images for the x86 and Arm64 architectures, generate a multi-arch manifest, and push all images to your myrepo Amazon ECR repository.
docker buildx build --platform linux/amd64,linux/arm64 --tag ${AWS_ACCOUNT_ID}.dkr.ecr.${AWS_REGION}.amazonaws.com/myrepo:latest --push .
- Inspect the manifest and images created using the docker buildx imagetools command.
docker buildx imagetools inspect ${AWS_ACCOUNT_ID}.dkr.ecr.${AWS_REGION}.amazonaws.com/myrepo:latest
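The inspect output shows a manifest list that references one image per platform. Its shape resembles the following sketch (the digests here are placeholders, and the exact fields may vary with your Buildx version):

```
Name:      <account-id>.dkr.ecr.us-west-2.amazonaws.com/myrepo:latest
MediaType: application/vnd.docker.distribution.manifest.list.v2+json

Manifests:
  Name:     .../myrepo:latest@sha256:<digest-1>
  Platform: linux/amd64

  Name:     .../myrepo:latest@sha256:<digest-2>
  Platform: linux/arm64
```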
The multi-arch Docker images and manifest file are available on your Amazon ECR repository myrepo. You can use these images to test running your containerized workload on x86 and Arm64 Graviton2 instances.
Test by running containers on x86 and Arm64 Graviton2 instances
You can now test by running your Node.js application on x86 and Arm64 Graviton2 instances. The Docker engine on EC2 instances automatically detects the presence of the multi-arch Docker images on Amazon ECR and selects the right variant for the underlying architecture.
- Launch two EC2 instances. For more information on launching instances, see the Amazon EC2 documentation.
a. x86 – t3a.micro
b. Arm64 – t4g.micro
- Your EC2 instances require permissions to make calls to the Amazon ECR API operations and to pull images from your Amazon ECR repositories. I recommend that you use an IAM role to allow the EC2 service to access Amazon ECR on your behalf. Use the same IAM role created for your Cloud9 instance and attach it to both the x86 and Arm64 instances.
- First, run the application on the x86 instance, followed by the Arm64 Graviton2 instance. Connect to your x86 instance via SSH or EC2 Instance Connect.
- Update installed packages and install Docker with the following commands.
sudo yum update -y
sudo amazon-linux-extras install docker
sudo service docker start
sudo usermod -a -G docker ec2-user
- Log out and log back in to pick up the new Docker group permissions. Enter the docker info command and verify that the ec2-user can run Docker commands without sudo.
docker info
- Enter the following commands to set your AWS Region and AWS account ID as environment variables. These refer to your numeric AWS account ID and the AWS Region where your registry endpoint is located.
AWS_ACCOUNT_ID=aws-account-id
AWS_REGION=us-west-2
- Authenticate your Docker client to your ECR registry so that you can use the docker pull command to pull images from the repositories. Enter the following command to authenticate to your ECR repository. For more information, see Private registry authentication.
aws ecr get-login-password --region ${AWS_REGION} | docker login --username AWS --password-stdin ${AWS_ACCOUNT_ID}.dkr.ecr.${AWS_REGION}.amazonaws.com
- Validate your Docker client is authenticated to Amazon ECR successfully.
- Pull the latest image using the docker pull command. Docker automatically selects the correct platform version based on the CPU architecture.
docker pull ${AWS_ACCOUNT_ID}.dkr.ecr.${AWS_REGION}.amazonaws.com/myrepo:latest
- Run the image in detached mode with the docker run command, using the -dp flag.
docker run -dp 80:3000 ${AWS_ACCOUNT_ID}.dkr.ecr.${AWS_REGION}.amazonaws.com/myrepo:latest
- Open your browser to the public IP address of your x86 instance and validate that your application is running. The ${process.arch} variable in the application shows the processor architecture the container is running on. This step validates that the Docker image runs successfully on the x86 instance.
- Next, connect to your Arm64 Graviton2 instance and repeat steps 2 to 9 to install Docker, authenticate to Amazon ECR, and pull the latest image.
- Run the image in detached mode with the docker run command, using the -dp flag.
docker run -dp 80:3000 ${AWS_ACCOUNT_ID}.dkr.ecr.${AWS_REGION}.amazonaws.com/myrepo:latest
- Open your browser to the public IP address of your Arm64 Graviton2 instance and validate that your application is running. This step validates that the Docker image runs successfully on the Arm64 Graviton2 instance.
- We now create an Application Load Balancer. This allows you to control the distribution of traffic to your application between the x86 and Arm64 instances.
- Refer to the Application Load Balancer documentation to create an ALB and register both the x86 and Arm64 instances as targets. Enter my-alb for your Application Load Balancer name.
- Open your browser and point to your Load Balancer's DNS name. Refresh to see the output switch between the x86 and Graviton2 instances.
Cleaning up
To avoid incurring future charges, clean up the resources created as part of this post.
First, we delete the Application Load Balancer.
- Open the Amazon EC2 Console.
- On the navigation pane, under Load Balancing, choose Load Balancers.
- Select your Load Balancer my-alb, and choose Actions, Delete.
- When prompted for confirmation, choose Yes, Delete.
Next, we delete x86 and Arm64 EC2 instances used for testing multi-arch Docker images.
- Open the Amazon EC2 Console.
- On the instance page, locate your x86 and Arm64 instances.
- Check both instances and choose Instance State, Terminate instance.
- When prompted for confirmation, choose Terminate.
Next, we delete the Amazon ECR repository and multi-arch Docker images.
- Open the Amazon ECR Console.
- From the navigation pane, choose Repositories.
- Select the repository myrepo and choose Delete.
- When prompted, enter delete, and choose Delete. All images in the repository are also deleted.
Finally, we delete the AWS Cloud9 IDE environment.
- Open your Cloud9 Environment.
- Select the environment named mycloud9 and choose Delete. AWS Cloud9 also terminates the Amazon EC2 instance that was connected to that environment.
Conclusion
With Graviton2 instances, you can take advantage of 20% lower cost and up to 40% better price-performance over comparable x86-based instances. The container orchestration services on AWS, Amazon ECS and Amazon EKS, support Graviton2 instances, including mixed x86 and Arm64 clusters. Amazon ECR supports multi-arch images, and Docker itself supports a full multiple-architecture toolchain through its new Docker Buildx command.
To summarize, we created a simple environment to build multi-arch Docker images to run on x86 and Arm64. We stored them in Amazon ECR and then tested running them on both x86 and Arm64 Graviton2 instances. We invite you to experiment with your own containerized workload on Graviton2 instances to optimize your cost and take advantage of better price-performance.