AWS Partner Network (APN) Blog

Architecting Microservices Using Weave Net and Amazon EC2 Container Service

In the past, it was far more common to have services that were relatively static. You might remember having a database server that lived at a fixed IP address, probably with a static DNS mapping pointing to it. If any of that information changed, it caused problems.

However, in the cloud we accept change as a constant, especially when building microservice architectures based on containers. Containers often live only for minutes or hours, and they may change hosts and ports each time they are spun up. When building containerized microservices, a major component of the architecture includes underlying “plumbing” that allows components to communicate across many hosts in a cluster. Generally, the tools allowing this communication also help us deal with the dynamic nature of containers by automatically tracking and replicating information about changes across the cluster. These tools provide functionality commonly known as “service discovery.” Today, we’re going to talk about a popular service discovery option, Weave Net, by APN Technology Partner Weaveworks.

We’re fans of Weave Net for a few reasons, but the most attractive thing about Weave Net is its simplicity. Weave Net allows you to reference containers by name, as you would reference them locally with Docker links, across the entire cluster. That’s it.

Weave Net is also unique among service discovery options because it doesn’t use an external database or a consensus algorithm, but instead uses a gossip protocol to share grouped updates about changes in the cluster. This has interesting implications for partition tolerance, where availability is prioritized over consistency.

Let’s talk CAP theorem!

In a world where the network is unreliable (read: the one that we live in), your distributed system can be highly available and partition tolerant, or strongly consistent and partition tolerant, but not highly available, strongly consistent, and partition tolerant all at once. Service discovery options that use consensus algorithms prioritize consistency, so in network partition events they must sacrifice availability to avoid having inconsistent cluster state.

Weave Net, on the other hand, uses a data structure called a conflict-free replicated data type (CRDT), in which each node makes only local updates, and replication happens by merging those updates across the rest of the cluster. This means the data structure is eventually consistent, but the lack of a strong consistency requirement allows it to be extremely fast and highly available. (See the fast data path discussion on the Weaveworks website.) It’s important to consider the needs of your system: if your application prioritizes availability or doesn’t require strong consistency, then Weave Net is an excellent service discovery option.
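To make the merge idea concrete, here’s a toy grow-only counter, one of the simplest CRDTs, sketched in Python. This is only an illustration of the concept, not Weave Net’s actual data structure: each node increments only its own slot, and replicas converge by taking an element-wise maximum whenever they merge.

class GCounter:
    """Toy grow-only counter CRDT; an illustration only."""

    def __init__(self, node_id, num_nodes):
        self.node_id = node_id
        self.counts = [0] * num_nodes  # one slot per node

    def increment(self):
        # Local update only: a node never writes another node's slot.
        self.counts[self.node_id] += 1

    def merge(self, other):
        # Element-wise max is commutative, associative, and idempotent,
        # so replicas converge regardless of gossip order or repetition.
        self.counts = [max(a, b) for a, b in zip(self.counts, other.counts)]

    def value(self):
        return sum(self.counts)

# Two nodes update independently during a partition...
a, b = GCounter(0, 2), GCounter(1, 2)
a.increment(); a.increment()
b.increment()

# ...then converge once they exchange state.
a.merge(b)
b.merge(a)
assert a.value() == b.value() == 3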

Microservice design with Weave Net and Amazon ECS

Within Amazon EC2 Container Service (Amazon ECS), you can use task definitions to deploy containers to your ECS cluster. A task definition can include one or many containers, but the key concept to remember is that when you run a task definition, all the containers in that task get placed on a single instance. Ask yourself, “Does my application require a specific combination of containers to run together on the same host?” You might say, “Yes, this is necessary,” if your containers need to share a local Amazon Elastic Block Store (Amazon EBS) volume, for example.

You might also answer in the affirmative if your containers need to be linked. However, Weave Net gives you more flexibility when designing your application. Because Weave Net automatically handles container communication across the cluster, you have a bit more freedom when building task definitions. You’re freed from needing to use local Docker links, so you don’t have to place all the containers that make up your application on the same instance.

As a result, you can define a single container per task definition. Consider our sample application: a Python Flask application on the front end and Redis on the back end. Because these components have different scheduling and scaling requirements, we should manage them independently by building a separate task definition for each container. This one-to-one model is far simpler to think about and to manage than multiple services defined in a single task definition.

Let’s unpack our sample application architecture a little bit. Our front-end application is a simple Python hit counter. On line 6 of the application code, shown in the sketch below, we build the connection object for our Redis back end. We know that we are going to name the Redis container redis, so we can write our code with this in mind. Weave Net will take care of the rest.
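Here’s a minimal sketch of what that code might look like (the actual errordeveloper/hit-counter source may differ slightly):

from flask import Flask
from redis import Redis

app = Flask(__name__)

redis = Redis(host='redis', port=6379)  # line 6: reach Redis by its container name

@app.route('/')
def hit():
    count = redis.incr('hits')
    return 'Hello World! I have been seen %s times.\n' % count

if __name__ == '__main__':
    app.run(host='0.0.0.0', port=5000)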

Now we need to build two ECS task definitions. These definitions are really straightforward, since we’re adding only a single container to each task.

Here’s the back-end Redis task (notice that my container name is redis):

{
    "ContainerDefinitions": [
        {
            "Essential": true,
            "Name": "redis",
            "Image": "redis",
            "Cpu": 10,
            "Memory": 300
        }
    ],
    "Volumes": []
}

Here’s the front-end hit counter task:

{
    "ContainerDefinitions": [
        {
            "PortMappings": [
                {
                    "HostPort": 80,
                    "ContainerPort": 5000
                }
            ],
            "Essential": true,
            "Name": "hit-counter",
            "Image": "errordeveloper/hit-counter",
            "Command": [
                "python",
                "app.py"
            ],
            "Cpu": 10,
            "Memory": 300
        }
    ],
    "Volumes": []
}
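The snippets above use the PascalCase key style of the CloudFormation template linked at the end of this post. If you register task definitions directly with the AWS CLI instead, note that the ECS API expects camelCase keys and a task family name. A hypothetical registration of the Redis task might look like this (the family name here is made up):

$ aws ecs register-task-definition --family redis-task --region eu-west-1 \
    --container-definitions '[{"name": "redis", "image": "redis", "cpu": 10, "memory": 300, "essential": true}]'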

Next, to scale these components individually, we’ll take these task definitions and wrap a higher-level ECS scheduling construct, known as a service, around them. A service lets us do things like define how many tasks we want running in the cluster, scale the number of active tasks, automatically register the containers with an Elastic Load Balancing (ELB) load balancer, and maintain a quota of healthy containers. In our architecture, which has one container per task and one task per service, we can use the service scheduler to determine how many Redis containers and hit-counter applications we’d like to run. If we run more than one container per service, Weave Net automatically load-balances among the containers in the service via round-robin DNS responses.
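For example, a service wrapping the hit-counter task and scaling it to three tasks could be created like this (the service name and task definition name here are hypothetical):

$ aws ecs create-service --cluster WeaveSKO-EcsCluster-1USVF4UXK0IET \
    --service-name hit-counter --task-definition hit-counter-task \
    --desired-count 3 --region eu-west-1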

The end result is an ECS cluster that has three container instances and two services—one front-end hit-counter service scaled to three tasks, and a back-end Redis service with one task running.

$ aws ecs describe-clusters --cluster WeaveSKO-EcsCluster-1USVF4UXK0IET --region eu-west-1
{
    "clusters": [
        {
            "status": "ACTIVE",
            "clusterName": "WeaveSKO-EcsCluster-1USVF4UXK0IET",
            "registeredContainerInstancesCount": 3,
            "pendingTasksCount": 0,
            "runningTasksCount": 4,
            "activeServicesCount": 2,
            "clusterArn": "arn:aws:ecs:eu-west-1:<account-id>:cluster/WeaveSKO-EcsCluster-1USVF4UXK0IET"
        }
    ]
}

$ aws ecs list-services --cluster arn:aws:ecs:eu-west-1:<account-id>:cluster/WeaveSKO-EcsCluster-1USVF4UXK0IET --region eu-west-1
{
    "serviceArns": [
        "arn:aws:ecs:eu-west-1:<account-id>:service/WeaveSKO-EcsBackendDataService-1KTM3UFB3LKIO",
        "arn:aws:ecs:eu-west-1:<account-id>:service/WeaveSKO-EcsFrontendAppService-1A5RQTSV7LMWE"
    ]
}
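To verify that the front-end service has scaled out, you can list its tasks using the service name from the listing above:

$ aws ecs list-tasks --cluster WeaveSKO-EcsCluster-1USVF4UXK0IET \
    --service-name WeaveSKO-EcsFrontendAppService-1A5RQTSV7LMWE --region eu-west-1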

Weave Net technical deep dive

How does Weave Net provide so much flexibility, with no configuration required, when you’re designing microservices?

Weave Net starts by implementing an overlay network between cluster hosts. Each host has a network bridge, and containers are connected to the bridge with a virtual Ethernet pair, over which each container is assigned an IP address and netmask.

Weave Net discovers peers by querying the Auto Scaling API, so you don’t need to configure the overlay network yourself. On each host, Weave Net also runs a component called the Docker API proxy, which sits between the Docker daemon and the Docker client to record events on the host, like a container starting or stopping. The proxy also handles automatic registration of new containers to the overlay bridge.
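Concretely, pointing the Docker client at the proxy is all it takes for new containers to join the overlay and be registered in DNS. For example (this assumes the weave script is installed and weave launch has already been run, which the ECS integration handles for you):

$ eval $(weave env)                  # point the Docker client at the Weave Net proxy
$ docker run -d --name redis redis   # attached to the overlay and registered as "redis" automatically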

Weave Net builds an in-memory database of configuration information that it records from local Docker activity, and it chooses a random subset of peers with which to exchange topology information (using the gossip protocol). When it’s time to forward communications between hosts, packets are encapsulated with a tunnel header and handed to the Linux kernel. The Weave Net router communicates with the Linux kernel’s Open vSwitch datapath module to tell the kernel how to process packets. This approach lets packets travel straight from the user’s application through the kernel, avoiding a costly context switch between user space and kernel space for each packet.
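If the weave script is available on a host, you can check which peer connections are using the fast datapath; connections using it are labeled fastdp in the output:

$ weave status connections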

These features greatly reduce complexity when you design microservices. Weave Net also provides other niceties, like multicast support and round-robin DNS for container lookups. Take a look as we run a dig query against the hit-counter service. You can see that each query returns the three containers that back the service, in random order:

$ docker run 2opremio/weaveecsdemo dig +short hit-counter
10.32.0.3
10.36.0.3
10.40.0.3

$ docker run 2opremio/weaveecsdemo dig +short hit-counter
10.40.0.3
10.32.0.3
10.36.0.3

Weave Scope

The last thing I’d like to point out about Weaveworks is the useful Weave Scope component. Weave Scope is a monitoring and troubleshooting tool that provides a bird’s-eye view of the cluster, and it can be run either as a hosted/SaaS tool or locally on your cluster instances. It displays all the containers in the system and the relationships among them.

We can also drill down into a specific container to see vital telemetry information.

Lastly, we can open a shell directly into the container and, for example, take a look at the output of env inside it.
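To try Weave Scope locally on your cluster instances, a minimal launch sequence looks like this (assuming the standard installation path from the Weave Scope documentation; the UI is then served on port 4040 of each host):

$ sudo curl -L git.io/scope -o /usr/local/bin/scope
$ sudo chmod a+x /usr/local/bin/scope
$ scope launch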

Conclusion

When you build microservice-based applications, deciding how individual microservices will communicate with one another requires a lot of thought. Weave Net greatly reduces the complexity involved in standing up microservices and increases flexibility in design patterns. In this blog post, we’ve explored Weave Net features and described how you might use them to design an application in Amazon ECS.

If you’d like to take a look at Weave Net and Weave Scope yourself, you can stand up your own copy of the example architecture from this post by using the following AWS CloudFormation template, which was developed and is maintained by Weaveworks: https://s3.amazonaws.com/weaveworks-cfn-public/integrations/ecs-baseline.json. Keep in mind that this template will spin up resources that could cost you money. To learn more about Weaveworks, visit their website at http://weave.works or take a look at their technical deep dive.