AWS Open Source Blog

AWS Service Operator for Kubernetes Now Available

NOTE: In mid-2019 we re-launched and intensified our efforts, deprecating and archiving the old code base of the AWS Service Operator and moving to a community-driven approach. We're currently in the design phase and invite you to comment on the design issues and become a contributor to the new project; see details at the new GitHub repo aws/aws-service-operator-k8s. (Update by Michael Hausenblas, 02/2020.)

The AWS Service Operator is an open source project, currently in developer preview, that allows you to manage your AWS resources directly from Kubernetes using the standard Kubernetes CLI, kubectl. It does so by modeling AWS services as Custom Resource Definitions (CRDs) in Kubernetes and applying those definitions to your cluster. This means a developer can model their entire application architecture, from container to ingress to the AWS services backing it, in a single YAML manifest. We anticipate that the AWS Service Operator will help reduce the time it takes to create new applications, and assist in keeping applications in their desired state.

Have you ever tried to integrate Amazon DynamoDB with an application running in Kubernetes? How about deploying an Amazon S3 bucket for your application to use? If you have, you know this usually requires a tool such as AWS CloudFormation or HashiCorp Terraform, plus a way to deploy those resources, either as one-off components or in a pipeline using the appropriate toolkit. The result is a disjointed experience between Kubernetes and AWS that leaves you, as the operator, to manage and maintain that lifecycle yourself. What if instead you could do all of this using Kubernetes' built-in control loop, storing the desired state in the API server for both the Kubernetes components and the AWS services they need? This is where the AWS Service Operator comes into play.

What is an Operator?

Kubernetes is built on top of what is called the controller pattern. In this pattern, applications and tools watch a central state store (etcd, fronted by the Kubernetes API server) and take action when something changes. Examples include the built-in kube-controller-manager and cloud-controller-manager. The controller pattern lets us build decoupled experiences without having to worry about how other components are integrated. An operator is a purpose-built application that manages a specific type of component using this same pattern. To learn about other operators, check out Awesome Operators.

AWS Service Operator

As of today, the AWS Service Operator lets you manage Amazon DynamoDB tables, S3 buckets, Amazon Elastic Container Registry (Amazon ECR) repositories, Amazon SNS topics, Amazon SQS queues, and SNS subscriptions, with many more integrations coming soon.

Prerequisites

  1. Kubernetes Cluster – try using Amazon EKS or kops to set one up
  2. kubectl
  3. awscli

Getting Started

Before we can deploy the AWS Service Operator, we need a way to provide AWS IAM credentials to Kubernetes pods. For demo purposes, we're going to grant administrator permissions to our Kubernetes worker nodes; in a production environment you would scope this down to just the permissions the operator needs.

If you are using an Amazon Elastic Container Service for Kubernetes (Amazon EKS) cluster, you most likely provisioned your worker nodes with CloudFormation. The command shown below uses your CloudFormation stack's outputs to look up the node instance role and attach the policy to it.

aws iam attach-role-policy \
    --policy-arn arn:aws:iam::aws:policy/AdministratorAccess \
    --role-name $(aws cloudformation describe-stacks --stack-name ${STACK_NAME} | jq -r ".Stacks[0].Outputs[0].OutputValue" | sed -e 's/.*\///g')

Make sure to replace ${STACK_NAME} with the nodegroup stack name from the CloudFormation console.
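
If you don't recall the stack name, one way to find it (assuming your worker nodes were created by CloudFormation) is to list the completed stacks and look for the node group stack:

aws cloudformation list-stacks \
    --stack-status-filter CREATE_COMPLETE UPDATE_COMPLETE \
    --query "StackSummaries[].StackName" \
    --output text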

You can now download the latest aws-service-operator.yaml Kubernetes manifest file and change a couple of values.

wget https://raw.githubusercontent.com/awslabs/aws-service-operator/master/configs/aws-service-operator.yaml

With the file downloaded locally, you need to make a few edits to include your environment information. On lines 96-98 you will see <CLUSTER_NAME>, <REGION>, and <ACCOUNT_ID>; change these to values appropriate for your environment.

- --cluster-name=aws-service-operator-demos
- --region=us-west-2
- --account-id=000000000000
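
If you prefer to script these edits, a substitution along the following lines should work (GNU sed syntax, assuming the placeholders appear literally as <CLUSTER_NAME>, <REGION>, and <ACCOUNT_ID>; substitute your own values):

sed -i \
    -e 's/<CLUSTER_NAME>/aws-service-operator-demos/' \
    -e 's/<REGION>/us-west-2/' \
    -e 's/<ACCOUNT_ID>/000000000000/' \
    aws-service-operator.yaml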

You can get your account ID using the aws sts get-caller-identity AWS CLI command.
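
For example, the following returns just the account ID:

aws sts get-caller-identity --query Account --output text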

After you have updated this file, apply it to your cluster.

$ kubectl apply -f aws-service-operator.yaml
namespace/aws-service-operator created
clusterrole.rbac.authorization.k8s.io/aws-service-operator created
serviceaccount/aws-service-operator created
clusterrolebinding.rbac.authorization.k8s.io/aws-service-operator created
deployment.apps/aws-service-operator created

This creates a new aws-service-operator namespace and deploys a ClusterRole, ServiceAccount, and ClusterRoleBinding, along with the Deployment that manages the AWS resources.
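
Before moving on, it's worth confirming that the operator pod itself is running:

kubectl get pods -n aws-service-operator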

Now you can check to see if the operator was deployed correctly by running

$ kubectl get customresourcedefinitions
NAME                                   CREATED AT
cloudformationtemplates.operator.aws   2018-09-27T21:30:10Z
dynamodbs.operator.aws                 2018-09-27T21:30:10Z
ecrrepositories.operator.aws           2018-09-27T21:30:10Z
s3buckets.operator.aws                 2018-09-27T21:30:10Z
snssubscriptions.operator.aws          2018-09-27T21:30:10Z
snstopics.operator.aws                 2018-09-27T21:30:10Z
sqsqueues.operator.aws                 2018-09-27T21:30:10Z

If you see these CRDs (and possibly more) applied to your cluster, you are ready to start using those resources.
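
You can also inspect one of the CRDs to see what the operator registered, for example:

kubectl describe crd dynamodbs.operator.aws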

Deploy AWS Resources

For this example, we’re going to use the AWS Service Operator to create a DynamoDB table and deploy an application that uses the table after it’s been created.

Here’s an example of an Amazon DynamoDB Table manifest file:

apiVersion: service-operator.aws/v1alpha1
kind: DynamoDB
metadata:
  name: dynamo-table
spec:
  hashAttribute:
    name: name
    type: S
  rangeAttribute:
    name: created_at
    type: S
  readCapacityUnits: 5
  writeCapacityUnits: 5

As you can see, it uses the service-operator.aws/v1alpha1 API version, which is the custom API version for the AWS Service Operator.
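
If you only wanted the table on its own (without the sample application used below), you could save the snippet above to a file, say dynamo-table.yaml, and apply just that:

kubectl apply -f dynamo-table.yaml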

The entire manifest we’re going to deploy will include a Deployment, Service, and DynamoDB table. Now let’s apply this manifest to our cluster to see everything work.

kubectl apply -f https://gist.githubusercontent.com/christopherhein/1cb7da812a197ac3cc547ed2495faf9d/raw/4909e8ad6b35843e1696735e9f62301e9bde7ff9/dynamodb-app.yaml

This applies the manifest to your cluster and creates a new DynamoDB table that your applications can use. Next, list the DynamoDB tables managed by the AWS Service Operator. This command uses the -w (watch) flag, which updates the output inline when there are changes.

$ kubectl get dynamodb -o yaml -w
additionalResources:
  configMaps: null
apiVersion: service-operator.aws/v1alpha1
kind: DynamoDB
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"service-operator.aws/v1alpha1","kind":"DynamoDB","metadata":{"annotations":{},"name":"dynamo-table","namespace":"default"},"spec":{"hashAttribute":{"name":"name","type":"S"},"rangeAttribute":{"name":"created_at","type":"S"},"readCapacityUnits":5,"writeCapacityUnits":5}}
  clusterName: ""
  creationTimestamp: 2018-10-04T21:39:39Z
  generation: 1
  name: dynamo-table
  namespace: default
  resourceVersion: "145514"
  selfLink: /apis/service-operator.aws/v1alpha1/namespaces/default/dynamodbs/dynamo-table
  uid: ff66c9a0-c81d-11e8-ab31-02d2702eca80
output:
  tableARN: ""
  tableName: ""
spec:
  cloudFormationTemplateName: ""
  cloudFormationTemplateNamespace: ""
  hashAttribute:
    name: name
    type: S
  rangeAttribute:
    name: created_at
    type: S
  readCapacityUnits: 5
  rollbackCount: 0
  writeCapacityUnits: 5
status:
  resourceStatus: CREATE_IN_PROGRESS
  resourceStatusReason: ""
  stackID: arn:aws:cloudformation:us-west-2:XXXXXXXXXXXX:stack/aws-service-operator-dynamodb-dynamo-table-default/bf4e7670-c81c-11e8-9ca9-0a473bf201de

In this output you will see the status key, which reports the status of the resource being created, as well as an output key, which is populated once the resource is provisioned. The final output of this command should be:

additionalResources:
  configMaps:
  - dynamo-table
apiVersion: service-operator.aws/v1alpha1
kind: DynamoDB
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"service-operator.aws/v1alpha1","kind":"DynamoDB","metadata":{"annotations":{},"name":"dynamo-table","namespace":"default"},"spec":{"hashAttribute":{"name":"name","type":"S"},"rangeAttribute":{"name":"created_at","type":"S"},"readCapacityUnits":5,"writeCapacityUnits":5}}
  clusterName: ""
  creationTimestamp: 2018-10-04T21:39:39Z
  generation: 1
  name: dynamo-table
  namespace: default
  resourceVersion: "145600"
  selfLink: /apis/service-operator.aws/v1alpha1/namespaces/default/dynamodbs/dynamo-table
  uid: ff66c9a0-c81d-11e8-ab31-02d2702eca80
output:
  tableARN: arn:aws:dynamodb:us-west-2:XXXXXXXXXXXX:table/dynamo-table
  tableName: dynamo-table
spec:
  cloudFormationTemplateName: ""
  cloudFormationTemplateNamespace: ""
  hashAttribute:
    name: name
    type: S
  rangeAttribute:
    name: created_at
    type: S
  readCapacityUnits: 5
  rollbackCount: 0
  writeCapacityUnits: 5
status:
  resourceStatus: CREATE_COMPLETE
  resourceStatusReason: ""
  stackID: arn:aws:cloudformation:us-west-2:XXXXXXXXXXXX:stack/aws-service-operator-dynamodb-dynamo-table-default/bf4e7670-c81c-11e8-9ca9-0a473bf201de
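
At this point you can also verify the table from the AWS side. Once the CloudFormation stack completes, the following should report ACTIVE:

aws dynamodb describe-table --table-name dynamo-table \
    --query "Table.TableStatus" \
    --output text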

We can now get the Service's load balancer endpoint using kubectl get service -o wide, which lets us load the application and see the data it persists in DynamoDB.

$ kubectl get service -o wide
NAME         TYPE           CLUSTER-IP       EXTERNAL-IP                                                              PORT(S)                      AGE   SELECTOR
frontend     LoadBalancer   10.100.53.114    aff6aa155c81d11e8ab3102d2702eca8-294585320.us-west-2.elb.amazonaws.com   80:32200/TCP                 2m    app=frontend
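
Substituting the EXTERNAL-IP value from your own output, you should be able to open the endpoint in a browser or hit it with curl (the sample Service listens on port 80):

curl http://<EXTERNAL-IP>/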

Get Involved

Learn more about the AWS Service Operator project on GitHub, and get involved! Pull requests and issues are welcome. We’d love to hear what services you think should be implemented to make them easier to consume from within your Kubernetes cluster. Please reach out and let us know what you think.

Chris Hein

Chris Hein is a Sr. Developer Advocate for Kubernetes/EKS at Amazon Web Services. Before Amazon, Chris worked for a number of large and small companies like GoPro, Sproutling, and Mattel. Read more from Chris at https://aws.amazon.com/blogs/opensource/author/heichris/ and follow him at @christopherhein.

Michael Hausenblas

Michael works in the AWS open source observability service team where he is a Solution Engineering Lead and owns the AWS Distro for OpenTelemetry (ADOT) from the product side.