AWS Open Source Blog

Provision AWS Services Through Kubernetes Using the AWS Service Broker

IMPORTANT NOTE – Oct 12, 2018

The steps described in this post are no longer accurate; please refer to the AWS Service Broker GitHub project for up-to-date installation instructions. We’ll be updating this post soon.


There’s no doubt that containers have changed how we build projects. One of the guiding principles of a containerized workflow approach has been to give back control to the developer, allowing them to choose their dependencies and how to consume them – most importantly, when they need them. Nowadays, no one can wait three weeks for an ops team to provision a database.

It’s no surprise, then, that the community needed to come up with a way to make sure that, no matter where your containers are run, you will always be able to control your external dependencies in a predictable and simple way. The solution: the Open Service Broker (OSB) API.

Today, I would like to introduce you to the AWS Service Broker, an implementation of the OSB API that will allow you to provision AWS services like RDS and EMR directly through any platform supporting the OSB API. Currently, that list includes Kubernetes, OpenShift, and Cloud Foundry.

We announced the AWS Service Broker at re:Invent 2017 with support for ten initial services. We added an additional eight services in April this year, and we continue to add support for more AWS services on a regular cadence.

The architecture behind the service broker approach in Kubernetes is pretty simple. Kubernetes has the Service Catalog project, which allows OSB-compliant service brokers to register their lists of available services with the catalog. Any user on the platform with the correct permissions can then ask the service catalog for any of the available service plans. The broker provisions the service and binds the returned connection details to a set of secrets.

AWS Service Broker

I’ve always felt that the best way to explain something is to show how it works. So, let’s jump straight in so you can go and try it yourself.

What you’ll need

There are a few things you will need in order to follow along with this blog post. I won’t be covering the installation or deployment of these dependencies, but there is a whole list of resources available online that will help you get these set up.

  • AWS Account with the ability to create IAM permissions
  • kops Cluster (Kubernetes v1.9.3)
  • Helm v2.9.0-rc5
  • AWS CLI v1.15.11
  • Python 2.7.13+

Install the Kubernetes Service Catalog

The Kubernetes Service Catalog is the mechanism through which all services are advertised to the Kubernetes platform. It is the Service Catalog which communicates with the AWS Service Broker when managing AWS Services. There are a variety of ways to install the Service Catalog; I personally find using Helm to be the simplest. The Service Catalog has a CLI called svcat that makes this process even easier.

Download the svcat CLI

This step downloads the svcat CLI for Linux, but there is a release for every major OS. For full installation instructions, take a look at the documentation here. If you are using Linux, you can run these commands:

curl -sLo svcat https://download.svcat.sh/cli/latest/linux/amd64/svcat
chmod +x svcat
sudo mv svcat /usr/local/bin
svcat install plugin

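Assuming svcat is now on your PATH, you can verify that the client installed correctly; svcat version prints the client version (and the server version once the Service Catalog is running):

svcat version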

Add the Service Catalog chart repository to Helm and install Service Catalog

helm repo add svc-cat https://svc-catalog-charts.storage.googleapis.com
helm install svc-cat/catalog --name catalog --namespace catalog

To check whether the installation was successful, you can list the pods launched into the catalog namespace:

kubectl get pods --namespace=catalog

Permissions, Permissions

Now that you have the Kubernetes Service Catalog deployed, we need to make sure that the AWS Service Broker has the correct permissions to launch AWS services into your AWS account.

The AWS Service Broker can obtain credentials in one of two ways:

  • Statically configured credentials in the config file (works well for on-prem deployments)
  • The AWS SDK credential provider chain (best practice when deployed on AWS)

The AWS Service Broker uses CloudFormation to manage the lifecycle of any resources created in your AWS account, so we need to create a role that CloudFormation will assume when a service is created.
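
For reference, a trust policy that lets CloudFormation assume a role generally looks like the sketch below; the cfn-role-trust-rel.json file you’ll download in the next step should contain something equivalent (this is an illustration, not necessarily the file’s verbatim contents):

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "Service": "cloudformation.amazonaws.com" },
      "Action": "sts:AssumeRole"
    }
  ]
}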

Download the templates and definitions you will use during this walkthrough

curl -kLO https://s3.amazonaws.com/awsservicebroker/assets/blog-templates.tar.gz
mkdir blogtemplates
tar -xvf blog-templates.tar.gz -C blogtemplates
cd blogtemplates
aws iam create-policy --policy-name "aws-service-broker-cfn-deploy-policy" \
--policy-document file://cfn-deployment-policy.json

Copy the value of the ARN; you will need it in a later step where I reference ${CFN_POLICY_ARN}.
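
If you prefer to capture it in a shell variable instead of copying it by hand, the AWS CLI’s --query support makes that straightforward (the policy name matches the one created above):

CFN_POLICY_ARN=$(aws iam list-policies \
  --query "Policies[?PolicyName=='aws-service-broker-cfn-deploy-policy'].Arn" \
  --output text)
echo ${CFN_POLICY_ARN}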

Create new role and attach the policies

In this section, we will create the CloudFormation role which will be assumed by the service broker and attach the newly created policy to it. We will also edit the kops config to add additional node roles.

aws iam create-role --role-name "aws-servicebroker-cfn-deploy-role" \
--assume-role-policy-document file://cfn-role-trust-rel.json \
--description "AWS Service Broker Deployment Role"

Copy down the role ARN. You will need this later where I reference ${CFN_ROLE_ARN}.
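
As before, you can pull this straight into a variable rather than copying it manually:

CFN_ROLE_ARN=$(aws iam get-role \
  --role-name "aws-servicebroker-cfn-deploy-role" \
  --query Role.Arn --output text)
echo ${CFN_ROLE_ARN}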

Now, attach the policy we created earlier to the new role:

aws iam attach-role-policy \
--role-name "aws-servicebroker-cfn-deploy-role" \
--policy-arn ${CFN_POLICY_ARN}

The CLI returns no output on success, so don’t be alarmed when nothing comes back!
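
If you would like to double-check, list the policies attached to the role; aws-service-broker-cfn-deploy-policy should appear in the output:

aws iam list-attached-role-policies \
  --role-name "aws-servicebroker-cfn-deploy-role"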

Edit kops cluster config with additional node permissions

We now need to edit the kops cluster configuration to add additional permissions to the kops deployed nodes. We do this using the kops CLI:

kops edit cluster ${CLUSTER_NAME}

This will open the kops cluster manifest file in your $EDITOR. In this file, under .spec, add the following:

# ...
additionalPolicies:
    node: |
      [
        {
            "Action": [
                "cloudformation:CancelUpdateStack",
                "cloudformation:ContinueUpdateRollback",
                "cloudformation:CreateStack",
                "cloudformation:CreateUploadBucket",
                "cloudformation:DeleteStack",
                "cloudformation:DescribeAccountLimits",
                "cloudformation:DescribeStackEvents",
                "cloudformation:DescribeStackResource",
                "cloudformation:DescribeStackResources",
                "cloudformation:DescribeStacks",
                "cloudformation:GetStackPolicy",
                "cloudformation:ListStackResources",
                "cloudformation:ListStacks",
                "cloudformation:SetStackPolicy",
                "cloudformation:UpdateStack",
                "iam:AddUserToGroup",
                "iam:AttachUserPolicy",
                "iam:CreateAccessKey",
                "iam:CreatePolicy",
                "iam:CreatePolicyVersion",
                "iam:CreateUser",
                "iam:DeleteAccessKey",
                "iam:DeletePolicy",
                "iam:DeletePolicyVersion",
                "iam:DeleteRole",
                "iam:DeleteUser",
                "iam:DeleteUserPolicy",
                "iam:DetachUserPolicy",
                "iam:GetPolicy",
                "iam:GetPolicyVersion",
                "iam:GetUser",
                "iam:GetUserPolicy",
                "iam:ListAccessKeys",
                "iam:ListGroups",
                "iam:ListGroupsForUser",
                "iam:ListInstanceProfiles",
                "iam:ListPolicies",
                "iam:ListPolicyVersions",
                "iam:ListRoles",
                "iam:ListUserPolicies",
                "iam:ListUsers",
                "iam:PutUserPolicy",
                "iam:RemoveUserFromGroup",
                "iam:UpdateUser",
                "ec2:DescribeVpcs",
                "ec2:DescribeSubnets",
                "ec2:DescribeAvailabilityZones"
            ],
            "Resource": [
                "*"
            ],
            "Effect": "Allow"
        },
        {
            "Action": [
                "iam:PassRole"
            ],
            "Resource": [
                "arn:aws:iam::*:role/aws-servicebroker-cfn-deploy-role"
            ],
            "Effect": "Allow"
        },
        {
            "Action": [
                "ssm:GetParameters"
            ],
            "Resource": [
                "arn:aws:ssm:*:*:parameter/asb-access-key-id-*",
                "arn:aws:ssm:*:*:parameter/asb-secret-access-key-*"
            ],
            "Effect": "Allow"
        }
      ]

Inside the tarball you downloaded, there is an example of a complete config file saved as kops-config-example.yaml.
Save the file and exit your $EDITOR, then update the cluster:

kops update cluster ${CLUSTER_NAME} --yes

Once the update is done, confirm that the additional policy has been attached to the kops node role; you should see a policy named additional.nodes.${CLUSTER_NAME} in the output.

aws iam list-role-policies --role-name nodes.${CLUSTER_NAME}

Install the AWS Service Broker

To make it a little easier, we have created some scripts that will deploy the AWS Service Broker into your Kubernetes cluster. First, download the tarball:

curl -kLO https://s3.amazonaws.com/awsservicebroker/assets/aws-service-broker-install.tar.gz
mkdir awssb
tar -xvf aws-service-broker-install.tar.gz -C awssb
cd awssb

Inside this new folder you will find a YAML file called k8s-variables. Open the file and edit the following config mappings:

    • aws_cloudformation_role_arn: ${CFN_ROLE_ARN}
    • region: YOUR_REGION
    • vpc_id: VPC_IN_WHICH_KOPS_IS_RUNNING

Leave the rest of the config file as-is.
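
After editing, that portion of k8s-variables should look something like the following (the values here are placeholders, not real identifiers):

aws_cloudformation_role_arn: "arn:aws:iam::111122223333:role/aws-servicebroker-cfn-deploy-role"
region: "us-east-1"
vpc_id: "vpc-0abcd1234efgh5678"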

Now run the installer script.

chmod +x install_aws_service_broker.sh
./install_aws_service_broker.sh

When the installer completes, check that the AWS Service Broker pods are running and that the service has been created:

kubectl get pods --namespace=aws-service-broker
kubectl get svc

Confirm that the AWS Service Broker is registered with Service Catalog

Now that the AWS Service Broker is deployed and running, we can confirm that it has been registered with Service Catalog and see a list of services it makes available.

kubectl plugin svcat get brokers
kubectl plugin svcat get classes
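
To drill into a specific service and the plans it offers before provisioning anything, you can describe its class; dh-sqs is the SQS class we will use below:

kubectl plugin svcat describe class dh-sqs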

Provision a new SQS queue

Let’s go ahead and provision a simple SQS queue to which we can later post messages. Create a file called provision-sqs.yaml with this content:

apiVersion: servicecatalog.k8s.io/v1beta1
kind: ServiceInstance
metadata:
  name: opensource-blog-sqs-demo
spec:
  clusterServiceClassExternalName: dh-sqs
  clusterServicePlanExternalName: standard

Now apply the changes using kubectl, and check whether the provisioning succeeded:

kubectl apply -f provision-sqs.yaml
kubectl plugin svcat get instances
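
Provisioning happens asynchronously while CloudFormation creates the underlying resources; to watch the instance’s status in more detail, you can describe it:

kubectl plugin svcat describe instance opensource-blog-sqs-demo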

You can also confirm that the SQS queue has been created by using the AWS CLI:

aws --region YOUR_REGION sqs list-queues --queue-name-prefix AWSServiceBroker

Bind the provisioned service for use

Now that the service has been provisioned, we need to bind it so that we can get access to the queue. During the binding process, the broker will create a new set of secrets that you can consume in any pod in your cluster.

Create a file called sqs-demo-binding.yaml with this content:

apiVersion: servicecatalog.k8s.io/v1beta1
kind: ServiceBinding
metadata:
  name: os-blog-sqs-binding
spec:
  instanceRef:
    name: opensource-blog-sqs-demo

Now apply the changes using kubectl:

kubectl apply -f sqs-demo-binding.yaml

Let’s confirm that the binding was successful:

kubectl plugin svcat get bindings
kubectl plugin svcat describe binding os-blog-sqs-binding

There should now be a newly created secret that contains all the information required to consume this service:

kubectl get secrets
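
As with any Kubernetes secret, the values are stored base64-encoded. Assuming the broker stored the queue URL under the QueueURL key (the key used in the pod spec below), you can decode it directly:

kubectl get secret os-blog-sqs-binding \
  -o jsonpath='{.data.QueueURL}' | base64 --decode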

Attach the secret to any pod

Now that you have the bound secret, you can map it to any pod in your Kubernetes cluster like any other secret. The example below maps the SQS_QUEUE_URL and SQS_QUEUE_ARN environment variables inside a pod to the QueueURL and QueueARN keys in the os-blog-sqs-binding secret:

apiVersion: v1
kind: Pod
metadata:
  name: sqs-demo-app-pod
spec:
  containers:
  - name: pseudocontainer
    image: busybox
    env:
      - name: SQS_QUEUE_URL
        valueFrom:
          secretKeyRef:
            name: os-blog-sqs-binding
            key: QueueURL
      - name: SQS_QUEUE_ARN
        valueFrom:
          secretKeyRef:
            name: os-blog-sqs-binding
            key: QueueARN
  restartPolicy: Never

For more information on how the mapping of secrets in Kubernetes works, I suggest you read the official documentation.
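
As a final end-to-end check, you can send a test message to the queue from your workstation, reusing the same decoding trick to recover the queue URL from the binding secret:

QUEUE_URL=$(kubectl get secret os-blog-sqs-binding \
  -o jsonpath='{.data.QueueURL}' | base64 --decode)
aws --region YOUR_REGION sqs send-message \
  --queue-url ${QUEUE_URL} \
  --message-body "Hello from the AWS Service Broker"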

And that’s all folks!

Hopefully, you now understand the workflow of provisioning a new AWS service through Kubernetes using the AWS Service Broker, and how to consume it inside your application.

Keep an eye on our Open Source Blog; we will be sharing some patterns we see our customers adopting, complete with sample applications and walkthroughs.

Mandus Momberg

Mandus is a Partner Solutions Architect whose focus is on containers, DevOps, and hybrid architectures. He is an active member of the open source community, with code commits to projects like Hadoop and Presto. When not at work, you can find Mandus in a pile of wiring, circuit boards, solder, and small devices, trying to make magic a real thing.