AWS Startups Blog

How to Run Services Using Docker and Amazon EC2 Container Service

By Nate Slater, Solutions Architect, AWS

In the previous post, we took a detailed look at the architecture underpinning the Amazon EC2 Container Service. Now that we understand important concepts like scheduling, state management, and resource allocations, let’s see these in action by actually running a service in ECS.

The first step in building an application designed to run on ECS is to package the application code into one or more Docker containers. As discussed in the first post, Docker containers are based on images, and images are defined in Dockerfiles. A Dockerfile is a text file that describes how to “build” the image. For example, if we want to run a WordPress application in ECS, the Dockerfile might look like this:

FROM php:5.6-apache
RUN a2enmod rewrite
# install the PHP extensions we need
RUN apt-get update && apt-get install -y libpng12-dev libjpeg-dev && rm -rf /var/lib/apt/lists/* \
    && docker-php-ext-configure gd --with-png-dir=/usr --with-jpeg-dir=/usr \
    && docker-php-ext-install gd
RUN docker-php-ext-install mysqli
VOLUME /var/www/html
ENV WORDPRESS_VERSION 4.1.1
ENV WORDPRESS_UPSTREAM_VERSION 4.1.1
ENV WORDPRESS_SHA1 15d38fe6c73121a20e63ccd8070153b89b2de6a9
# upstream tarballs include ./wordpress/ so this gives us /usr/src/wordpress
RUN curl -o wordpress.tar.gz -SL https://wordpress.org/wordpress-${WORDPRESS_UPSTREAM_VERSION}.tar.gz \
    && echo "$WORDPRESS_SHA1 *wordpress.tar.gz" | sha1sum -c - \
    && tar -xzf wordpress.tar.gz -C /usr/src/ \
    && rm wordpress.tar.gz
COPY docker-entrypoint.sh /entrypoint.sh
# grr, ENTRYPOINT resets CMD now
ENTRYPOINT ["/entrypoint.sh"]
CMD ["apache2-foreground"]

The Dockerfile is used to build an image, which is then stored in a repository. Again, as discussed in the first post, this is accomplished by running the docker build command. For ECS to access the image, it must be placed in a publicly accessible Docker image repository; all images in DockerHub are available to ECS by default. So for our example, we’ll use the WordPress image in DockerHub that is built from the Dockerfile above.
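
If we were packaging our own application code, the steps would be the same: build the image locally, then push it to a repository that ECS can pull from. A minimal sketch, using a hypothetical DockerHub account and repository name:

$ docker build -t your-dockerhub-user/my-app .
$ docker login
$ docker push your-dockerhub-user/my-app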

And that’s about all there is to packaging up the application code. If it can run in a Docker container, it can run in ECS. This portability of Docker containers is quite powerful: we can build, test, and debug our code on any machine capable of running Docker (which is any machine with a Linux kernel). When the code is ready, we package it into a Docker image by building the image from a Dockerfile and storing it in a repository.
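
For example, before deploying to ECS we could sanity-check the same two containers used later in this post on a local Docker host. This is only a sketch; the container names, the host port, and the password are arbitrary, and it relies on the official wordpress image picking up its database settings from the linked mysql container:

$ docker run --name mysql -e MYSQL_ROOT_PASSWORD=password -d mysql
$ docker run --name wordpress --link mysql:mysql -p 8080:80 -d wordpress
$ curl -I http://localhost:8080/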

Now that we’ve packaged up our code to run in a Docker container, we need to provide the compute resources required to run containers. In ECS, this is called a cluster, and it consists of EC2 instances called “container instances” that run the ECS agent. To create an ECS cluster of container instances, we simply launch one or more EC2 instances using the Amazon ECS-Optimized Amazon Linux AMI. Any EC2 instance launched from this AMI is automatically placed in the “default” ECS cluster (every AWS account has a “default” ECS cluster in each region where the service runs). If you want to launch instances into a different ECS cluster, simply create the cluster using the ECS console or the following AWS CLI command. (Note: for this post, I’m using the CLI because the console didn’t exist until recently, but all of the CLI commands listed here can just as easily be run from the console.)

$ aws ecs create-cluster --cluster-name WordPress
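
We can verify that the cluster exists (and, later, see how many container instances have registered with it) using the describe-clusters command; the returned cluster should show a status of ACTIVE:

$ aws ecs describe-clusters --clusters WordPress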

Launching EC2 container instances into the cluster is as simple as launching an instance through the EC2 console. The instance will need to be associated with an IAM role that allows the agent running on the instance to make the necessary API calls to ECS. Details are documented here.

When launching the EC2 container instances into this cluster from the console, include the following user-data script in the “advanced” section of the “instance details” page:

#!/bin/bash
echo ECS_CLUSTER=WordPress >> /etc/ecs/ecs.config
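
If you’d rather launch the container instances from the CLI, the equivalent call looks roughly like the following sketch. The AMI ID, instance type, key pair, and security group are placeholders for your own values; the instance profile (shown here with the hypothetical name ecsInstanceRole) corresponds to the IAM role mentioned above, and the user-data script is assumed to have been saved to a file named ecs-user-data.sh:

$ aws ec2 run-instances --image-id ami-xxxxxxxx \
    --count 1 \
    --instance-type t2.medium \
    --key-name myKeyPair \
    --security-group-ids sg-xxxxxxxx \
    --iam-instance-profile Name=ecsInstanceRole \
    --user-data file://ecs-user-data.sh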

When the EC2 instances launch, we should see that they are now associated with the “WordPress” cluster:

$ aws ecs list-container-instances --cluster WordPress
{
    "containerInstanceArns": [
        "arn:aws:ecs:us-west-2:xxxxx:container-instance/1ef890e2-a42f-4ed5-bff5-7b39edd66c9d",
        "arn:aws:ecs:us-west-2:xxxxx:container-instance/315e5dbd-924b-4a86-9fa3-32ca3f7982b3",
        "arn:aws:ecs:us-west-2:xxxxx:container-instance/52b5ef99-add7-4dc4-a7e3-49019e1b7c9e",
        "arn:aws:ecs:us-west-2:xxxxx:container-instance/f00d41ad-043f-46c8-8437-cc4ea22aacf5",
        "arn:aws:ecs:us-west-2:xxxxx:container-instance/f7486c80-4b5f-4ba4-94db-2e238406bcc9"
    ]
}

We now have an ECS cluster capable of running Docker containers. The next step is to tell ECS how to run the containers that comprise our WordPress application. To do this, we use an entity called a “task definition.” An ECS task definition can be thought of as a prototype for running an actual task: for any given task definition, there can be zero or more task instances running in the cluster. The task definition allows one or more containers to be specified. For tasks consisting of more than one container, the dependencies between containers are expressed in the task definition. For example, if we want to run WordPress, we’d need both the WordPress container (described above) as well as a MySQL container. The ECS task definition would look like this:

{
  "containerDefinitions": [
    {
      "name": "wordpress",
      "links": [
        "mysql"
      ],
      "image": "wordpress",
      "essential": true,
      "portMappings": [
        {
          "containerPort": 80,
          "hostPort": 80
        }
      ],
      "memory": 500,
      "cpu": 10
    },
    {
      "environment": [
        {
          "name": "MYSQL_ROOT_PASSWORD",
          "value": "password"
        }
      ],
      "name": "mysql",
      "image": "mysql",
      "cpu": 10,
      "memory": 500,
      "essential": true
    }
  ],
  "family": "wordpress"
}

The ECS documentation describes all of the task definition parameters in detail. However, the ones to note here are the image and links parameters. This task definition includes two containers, wordpress and mysql. The image parameter specifies the name of each container’s image in DockerHub. The links parameter is what tells ECS that the wordpress container has a network dependency on the mysql container. So instead of having to manage the two containers required to run our WordPress application individually, we can treat the entire application as a single task definition.

Let’s go ahead and register this new task definition by saving the JSON to a file called ecs-wordpress-task-def.json and running this command:

$ aws ecs register-task-definition --family wordpress --cli-input-json file://./ecs-wordpress-task-def.json
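
To confirm that the registration succeeded (and to see which revision number was assigned), we can list the revisions in the family and then describe the one we just created:

$ aws ecs list-task-definitions --family-prefix wordpress
$ aws ecs describe-task-definition --task-definition wordpress:1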

The above task definition is actually all we need to execute our WordPress application in ECS. However, ECS defines another entity called a “service,” which is useful for long-running tasks like web applications. A service allows multiple instances of a task definition to be run simultaneously, and it provides integration with the Elastic Load Balancing service. For this example, we’re not using ELB because each WordPress service instance contains both the web layer and the database. In a production deployment, the database would live on some form of persistent storage and be shared by all the instances of the web layer behind the ELB. For the purposes of this example, though, it still makes sense to schedule our WordPress task definition as a service. If we were running a different type of application, such as a command-line app that does batch processing, all we would need is the task definition; we could schedule tasks in ECS directly from it, as shown in the sketch below.
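
For instance, with nothing but the registered task definition, a one-off task could be started directly with the run-task command; the count here is arbitrary:

$ aws ecs run-task --cluster WordPress --task-definition wordpress:1 --count 1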

To launch our WordPress application as an ECS service, we need a service definition like the following:

{
    "cluster": "WordPress",
    "serviceName": "wordpress",
    "taskDefinition": "wordpress:1",
    "loadBalancers": [],
    "desiredCount": 1
}

Let’s create the service using the following command:

$ aws ecs create-service --cluster WordPress --service-name wordpress --task-definition wordpress:1 --desired-count 1
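
Once the service is created, we can check on its deployment status, and later scale the number of running tasks up or down, with a couple of additional calls:

$ aws ecs describe-services --cluster WordPress --services wordpress
$ aws ecs update-service --cluster WordPress --service wordpress --desired-count 2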

Again, the service definition parameters are described in detail in the ECS documentation. But note that the service definition is where load balancers can be specified, which is one of the primary reasons why long-running tasks like web applications should be launched in ECS as a service. In a microservices architecture, each endpoint (or collection of related endpoints) can be defined as an ECS service, each managed independently of the others using different Docker images. In this scenario, ECS provides an extremely convenient way to deploy service endpoints.
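
For reference, a service definition that does sit behind a load balancer might look something like the sketch below. It assumes a classic ELB that has already been created with the hypothetical name wordpress-elb, and an IAM role (given the placeholder name ecsServiceRole here) that allows ECS to register and deregister container instances with that load balancer:

{
    "cluster": "WordPress",
    "serviceName": "wordpress",
    "taskDefinition": "wordpress:1",
    "loadBalancers": [
        {
            "loadBalancerName": "wordpress-elb",
            "containerName": "wordpress",
            "containerPort": 80
        }
    ],
    "role": "ecsServiceRole",
    "desiredCount": 2
}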

If all goes well, we should now have a running instance of our WordPress application:

$ aws ecs list-tasks --cluster WordPress
{
    "taskArns": [
        "arn:aws:ecs:us-west-2:xxxxx:task/7af1a8c0-d199-47af-b05c-9d0496a9d97d"
    ]
}
$ aws ecs describe-tasks --cluster WordPress --tasks 7af1a8c0-d199-47af-b05c-9d0496a9d97d
{
    "failures": [],
    "tasks": [
        {
            "taskArn": "arn:aws:ecs:us-west-2:xxxxx:task/7af1a8c0-d199-47af-b05c-9d0496a9d97d",
            "overrides": {
                "containerOverrides": [
                    {
                        "name": "mysql"
                    },
                    {
                        "name": "wordpress"
                    }
                ]
            },
            "lastStatus": "RUNNING",
            "containerInstanceArn": "arn:aws:ecs:us-west-2:xxxxx:container-instance/1ef890e2-a42f-4ed5-bff5-7b39edd66c9d",
            "clusterArn": "arn:aws:ecs:us-west-2:xxxxx:cluster/WordPress",
            "desiredStatus": "RUNNING",
            "taskDefinitionArn": "arn:aws:ecs:us-west-2:xxxxxx:task-definition/wordpress:1",
            "startedBy": "ecs-svc/9223370607723201507",
            "containers": [
                {
                    "containerArn": "arn:aws:ecs:us-west-2:xxxxx:container/0d2073be-88be-4f54-a8e1-ed27f4daf90d",
                    "taskArn": "arn:aws:ecs:us-west-2:xxxxx:task/7af1a8c0-d199-47af-b05c-9d0496a9d97d",
                    "lastStatus": "RUNNING",
                    "name": "mysql",
                    "networkBindings": []
                },
                {
                    "containerArn": "arn:aws:ecs:us-west-2:xxxxx:container/83a63f47-b1ab-488e-87b7-923463c9072d",
                    "taskArn": "arn:aws:ecs:us-west-2:xxxxx:task/7af1a8c0-d199-47af-b05c-9d0496a9d97d",
                    "lastStatus": "RUNNING",
                    "name": "wordpress",
                    "networkBindings": [
                        {
                            "bindIP": "0.0.0.0",
                            "containerPort": 80,
                            "hostPort": 80
                        }
                    ]
                }
            ]
        }
    ]
}

In the above JSON, the “containerInstanceArn” parameter tells us where the task is running. We can use it to identify the specific EC2 instance running our application’s containers:

$ aws ecs describe-container-instances --cluster WordPress --container-instances 1ef890e2-a42f-4ed5-bff5-7b39edd66c9d
{
    "failures": [],
    "containerInstances": [
        {
            "status": "ACTIVE",
            "registeredResources": [
                {
                    "integerValue": 4096,
                    "longValue": 0,
                    "type": "INTEGER",
                    "name": "CPU",
                    "doubleValue": 0.0
                },
                {
                    "integerValue": 7483,
                    "longValue": 0,
                    "type": "INTEGER",
                    "name": "MEMORY",
                    "doubleValue": 0.0
                },
                {
                    "name": "PORTS",
                    "longValue": 0,
                    "doubleValue": 0.0,
                    "stringSetValue": [
                        "2376",
                        "22",
                        "51678",
                        "2375"
                    ],
                    "type": "STRINGSET",
                    "integerValue": 0
                }
            ],
            "ec2InstanceId": "i-8224cc75",
            "agentConnected": true,
            "containerInstanceArn": "arn:aws:ecs:us-west-2:xxxxx:container-instance/1ef890e2-a42f-4ed5-bff5-7b39edd66c9d",
            "pendingTasksCount": 0,
            "remainingResources": [
                {
                    "integerValue": 4076,
                    "longValue": 0,
                    "type": "INTEGER",
                    "name": "CPU",
                    "doubleValue": 0.0
                },
                {
                    "integerValue": 6483,
                    "longValue": 0,
                    "type": "INTEGER",
                    "name": "MEMORY",
                    "doubleValue": 0.0
                },
                {
                    "name": "PORTS",
                    "longValue": 0,
                    "doubleValue": 0.0,
                    "stringSetValue": [
                        "2376",
                        "22",
                        "80",
                        "51678",
                        "2375"
                    ],
                    "type": "STRINGSET",
                    "integerValue": 0
                }
            ],
            "runningTasksCount": 1
        }
    ]
}
$ aws ec2 describe-instances --filters Name=instance-id,Values=i-8224cc75 | jq '.Reservations[].Instances[] | {PublicDnsName}'
{
    "PublicDnsName": "ec2-54-149-174-11.us-west-2.compute.amazonaws.com"
}

If we log into the EC2 instance, we should see our running Docker containers:

$ ssh -i ~/.ssh/id_myKeyPair ec2-user@ec2-54-149-174-11.us-west-2.compute.amazonaws.com
[ec2-user@ip-10-0-0-115 ~]$ docker ps
CONTAINER ID   IMAGE                            COMMAND                CREATED          STATUS          PORTS                        NAMES
0f69a8ed2cf1   wordpress:4                      "/entrypoint.sh apac   32 minutes ago   Up 32 minutes   0.0.0.0:80->80/tcp           ecs-wordpress-1-wordpress-94e3ffd5aafdb8df5300
edcd8fe51c21   mysql:5                          "/entrypoint.sh mysq   32 minutes ago   Up 32 minutes   3306/tcp                     ecs-wordpress-1-mysql-ccdff1db88a4bed44b00
278fd30d86e5   amazon/amazon-ecs-agent:latest   "/agent"               2 hours ago      Up 2 hours      127.0.0.1:51678->51678/tcp   ecs-agent

Likewise, if the security group used to launch the EC2 cluster instances is set up to allow inbound access on port 80, we should be able to see our WordPress application running in the browser:
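
If that rule isn’t already in place, it can be added with a single call; the security group ID here is a placeholder for the group attached to your container instances:

$ aws ec2 authorize-security-group-ingress --group-id sg-xxxxxxxx --protocol tcp --port 80 --cidr 0.0.0.0/0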

Running WordPress app

Conclusion

ECS is a sophisticated cluster management service that enables developers to harness the full power of Docker containers. Using ECS, engineers can take full advantage of the efficient development and test cycles made possible by the portability of Docker containers. Complex, distributed microservices architectures benefit from the isolation of the Docker execution environment. ECS allows distributed applications built using these architectures to run in a clustered computing environment under the full control of the customer, but with the full benefits of a managed service. In this post, we explored some of the architectural principles of container-based cluster computing. In later posts, we’ll explore some of the other advantages of deploying applications in a clustered environment. Until then, we encourage you to explore ECS and Docker further. Have fun!