AWS Cloud Operations Blog

Launch a standardized DevOps pipeline to deploy containerized applications using AWS Service Catalog

As companies implement DevOps practices, they find that standardizing the deployment of the continuous integration and continuous deployment (CI/CD) pipelines is increasingly important. Many end users and developers do not have the ability or time to create their own CI/CD pipelines and processes from scratch for each new project. By using AWS Service Catalog, organizations can achieve consistent governance and drive compliance by standardizing the DevOps process.

In this blog post, we show you how to deploy an example pipeline solution: you check in code, the pipeline automates the build process (including validation tests), and then it deploys the code as a fully managed container service. The pipeline provides zero downtime updates to the published web application API upon successful code check-in. In addition, through the use of AWS Service Catalog, you can create and manage catalogs of approved IT services, gaining standardization, self-service capabilities, fine-grained access control, and version control. You can find the templates used in this blog post in the AWS Service Catalog reference architectures repository on GitHub.

DevOps pipeline solution to deploy containerized application

Figure 1: Solution architecture

Overview

This solution consists of three main parts:

  1. AWS Service Catalog provides self-service capability for end users and developers to quickly deploy the DevOps pipeline solution. End users and developers can deploy all the core components of the DevOps pipeline solution shown above, with the underlying infrastructure and AWS service details abstracted away.
  2. The container code pipeline includes AWS CodePipeline and Amazon Elastic Container Registry (Amazon ECR). AWS CodePipeline is a fully managed continuous delivery service and Amazon ECR is a fully managed Docker container registry. This DevOps CI/CD deployment process is triggered by the check-in of application code. It automates the build, validation, and packaged copy of the Docker containerized application code to Amazon ECR.
  3. As a final step of this pipeline, the Docker container is published to an Amazon ECS service running as an AWS Fargate task on a cluster. Amazon Elastic Container Service (Amazon ECS) is a fully managed container orchestration service, and AWS Fargate is a serverless compute engine for containers. In this sample solution, the containerized application is a simple web API. When new versions of the Docker container are published, the Amazon ECS service is updated with zero downtime through rolling updates.
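The rolling-update behavior described above can be pictured with a short simulation. This is an illustrative sketch of the general technique, not ECS internals: with a minimum healthy percent of 100 and room to surge, replacement tasks are started (and become healthy) before old tasks are drained, so serving capacity never dips below the desired count.

```python
def rolling_update_states(desired, batch=1):
    """Simulate a rolling deployment: surge `batch` replacement tasks,
    wait for them to become healthy, then drain `batch` old tasks,
    until every old task is replaced. Returns the (old, new)
    healthy-task counts observed after each step."""
    old, new = desired, 0
    states = [(old, new)]
    while old > 0:
        step = min(batch, old)
        new += step               # replacements started and healthy first
        states.append((old, new))
        old -= step               # only then are old tasks drained
        states.append((old, new))
    return states

# Total healthy capacity never drops below the desired count:
states = rolling_update_states(desired=3, batch=1)
assert min(o + n for o, n in states) >= 3
assert states[-1] == (0, 3)
```

The same invariant is what makes the update in the final section of this post invisible to clients of the web API.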

How do I deploy the CI/CD and container-service-based solution?

The GitHub repository contains AWS CloudFormation (CFN) templates and a README guide. You can use the README guide and CFN templates as a general guide to set up the container CodePipeline solution. The following steps can be used as a reference for a sample solution deployment.

  1. Set up an AWS Service Catalog portfolio in your AWS account.
  2. Deploy products from the portfolio (the CodePipeline container project, the Amazon ECS cluster, the ECS service) to build a pipeline. Check the code into the AWS CodeCommit repo.
  3. Verify the deployment of the sample web application.
  4. Validate zero downtime updates for a new version of the sample web application.
  5. Clean up.

Set up AWS Service Catalog portfolio in the AWS account

  1. Sign in to the AWS Management Console with administrator permissions. Choose the N. Virginia Region. Be sure to use this Region as you perform the steps in this post.
  2. On a separate web browser tab, go to the GitHub repository.
  3. To get started, choose Launch Stack, which opens the AWS CloudFormation console where you can launch a nested stack.
  4. In the AWS CloudFormation console, on the Specify template page, use the defaults, and then choose Next.
  5. On the Specify stack details page, use the defaults, and then choose Next.

Note: If this is the first time you are running a Service Catalog portfolio stack from this repo, set the CreateEndUsers parameter to Yes. (This parameter creates the ServiceCatalogEndusers group.) When you add the end user or developer IAM user to the ServiceCatalogEndusers group, they can launch products in this portfolio.

StackName: SC-RA-ECS-Portfolio
PortfolioName: Service Catalog Containers Reference Architecture
PortfolioProvider: SC-RA-ECS-Portfolio
PortfolioDescription: Service Catalog Portfolio that contains reference architecture products for ECS.
CreateEndUsers: No
RepoRootURL: https://s3.amazonaws.com/aws-service-catalog-reference-architectures/
  6. In Configure stack options, use the defaults, and then choose Next.
  7. On the Review page, under Capabilities, select the IAM resources and CAPABILITY_AUTO_EXPAND check boxes, and then choose Create stack.

The SC-RA-ECS-Portfolio CloudFormation nested stack creates a Service Catalog portfolio named Service Catalog Containers Reference Architecture in your AWS account. This portfolio contains all of the Service Catalog products required for this solution, including IAM roles, groups, and users for governance, security, and access permissions. This completes the administrative portion of the solution setup.
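If you prefer to script the portfolio launch instead of clicking through the console, the same stack can be created with boto3. This is a minimal sketch using the parameter values shown above; the `cfn_parameters` helper is ours (not from the repo), while `create_stack` with `Parameters` and `Capabilities` is the real boto3 CloudFormation API:

```python
def cfn_parameters(values):
    """Convert a plain dict into CloudFormation's Parameters payload."""
    return [{"ParameterKey": k, "ParameterValue": v} for k, v in values.items()]

portfolio_params = cfn_parameters({
    "PortfolioName": "Service Catalog Containers Reference Architecture",
    "PortfolioProvider": "SC-RA-ECS-Portfolio",
    "PortfolioDescription": "Service Catalog Portfolio that contains "
                            "reference architecture products for ECS.",
    "CreateEndUsers": "No",
    "RepoRootURL": "https://s3.amazonaws.com/aws-service-catalog-reference-architectures/",
})

# With credentials configured, this is the equivalent of the console steps:
# import boto3
# boto3.client("cloudformation").create_stack(
#     StackName="SC-RA-ECS-Portfolio",
#     TemplateURL="<template URL from the GitHub repo>",
#     Parameters=portfolio_params,
#     Capabilities=["CAPABILITY_NAMED_IAM", "CAPABILITY_AUTO_EXPAND"],
# )
```

Note that StackName is a top-level argument to create_stack, not an entry in the Parameters list.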

Deploy products from the AWS Service Catalog portfolio to build the pipeline

Container CodePipeline Project

  1. Sign in to the AWS Management Console with developer permissions. Choose the N. Virginia Region. Be sure to use this Region as you perform the steps in this post.

Note: To access the products in the portfolio, the end user or developer must be part of the ServiceCatalogEndusers IAM group. Developer IAM permissions to other AWS services are also required for this example solution.

  2. In the AWS Service Catalog console, go to Products to see all the products in the Service Catalog Containers Reference Architecture portfolio.

Figure 2: Products list

  3. Choose Container CodePipeline Project, and then choose Launch product.
  4. On the Launch page, you can keep the automatically generated provisioned product name or enter your own (for example, mypipeline).
  5. Use the defaults in the Parameters, Manage tags, and Enable event notifications sections.
  6. Choose Launch product, and in the Events section, choose the record ID to view the output.
  7. When SUCCEEDED is displayed, make a note of the CloneUrlHttp value. You will use it in the next step.

Check code into the AWS CodeCommit Repo

AWS CodeCommit is a fully managed source control service that hosts Git-based repositories. It is supported by most integrated development environments (IDEs) and development tools. The AWS CodeCommit User Guide provides methods for setting up users and connections. For this sample solution, we used AWS Cloud9, a cloud-based, preconfigured, and preauthenticated IDE with direct access to AWS services.

  1. In the AWS Service Catalog console, go to the Products list, choose AWS Cloud9 IDE, and then choose Launch product.

Note: The AWS Cloud9 IDE is associated with Amazon Virtual Private Cloud (Amazon VPC) and must meet specific VPC requirements.

  2. On the Launch page, you can use the automatically generated provisioned product name or enter one of your own (for example, myide).
  3. In Parameters, use the defaults, except:

For LinkedRepoCloneUrl, use the value you copied in the previous step.

For LinkedRepoPath, enter /ETLTasks.

  4. Choose Launch product, and in the Events section, choose the record ID to view the output.
  5. Make a note of the Cloud9Url value.

Note: AWS Cloud9 clones repos locally and manages the security and credentials used with AWS CodeCommit. The cloned repos are already mapped.

  6. From the AWS Cloud9 IDE terminal, run the following command to copy the sample Flask (Python) application code for this demo into the local repo.
cp -R aws-service-catalog-reference-architectures/labs/CalcAPI/* ETLTasks/
  7. Check the code into the AWS CodeCommit repository to start the AWS CodePipeline process to build, validate, and deploy the code as a Docker containerized web API service.
git add --all
git commit -m "initial commit"
git push
  8. To verify the application code was pushed successfully, go to the CodeCommit console. In the left navigation pane, expand Source, choose Code, and then check the ETLTasks repository.

Figure 3: ETLTasks repository in the CodeCommit console

ECS Cluster

  1. In the AWS Service Catalog console, go to the Products list, choose ECS Fargate Cluster, and then choose Launch product.
  2. On the Launch page, you can use the automatically generated provisioned product name or enter your own (for example, mycluster). Use the defaults in the Manage tags and Enable event notifications sections.
  3. Choose Launch product, and in the Events section, choose the record ID to view the output.

The cluster has been provisioned.

Amazon ECS Service

  1. In the Amazon ECR console, copy the URI of the etltest repository. You need it in the next step.
  2. In the AWS Service Catalog console, go to Products list, choose Amazon ECS Service, and then choose Launch product.

Note: You must have the AWSServiceRoleForECS service-linked role in the AWS account you’re using. If the role does not exist, create it by running the following command from the AWS Cloud9 IDE terminal:

aws iam create-service-linked-role --aws-service-name ecs.amazonaws.com
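In a scripted setup, the same check-then-create logic can be expressed with boto3's IAM client (get_role raises a ClientError when the role is missing, and create_service_linked_role mirrors the CLI command above). The helper below is a sketch of ours, not part of the post's templates; it takes the two calls as arguments so the logic can be exercised without credentials:

```python
def ensure_service_linked_role(get_role, create_role,
                               role_name="AWSServiceRoleForECS"):
    """Return True if the role already exists; otherwise create it via
    create_role (e.g. iam.create_service_linked_role) and return False."""
    try:
        get_role(RoleName=role_name)
        return True
    except Exception:  # boto3 raises ClientError (NoSuchEntity) here
        create_role(AWSServiceName="ecs.amazonaws.com")
        return False

# With real credentials you would pass the boto3 client methods directly:
# import boto3
# iam = boto3.client("iam")
# ensure_service_linked_role(iam.get_role, iam.create_service_linked_role)
```

Injecting the calls keeps the control flow testable offline while the commented lines show the real wiring.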

  3. On the Launch page, enter a name for the service (for example, myservice). In Parameters, use the defaults except:

For ECRImageURI, paste the URI you copied in step 1 of this procedure.
For TaskDefinitionName, enter a name (for example, calc).
For ContainerPort, use 80.
For ContainerSize, use Small.

  4. Use the defaults in the Manage tags and Enable event notifications sections, choose Launch product, and in the Events section, use the record ID to view the output.

Verify the deployment of the sample web application

  1. To verify the successful deployment of the Docker containerized web application, go to the AWS Service Catalog console.
  2. From the Provisioned products list, choose the cluster name (for example, mycluster).
  3. In the Events section, choose the record ID to view the output. Make a note of the external URL.
  4. Open the external URL in a web browser and view the webpage. You should see the following:

Figure 4: External URL displays “I’m alive!” along with the version 3.1

You’ve now completed one full cycle: you checked in code, which triggered the pipeline and spun up an ECS cluster running an ECS service that hosts your Docker container.

Validate zero downtime updates for a new version of the sample web application

  1. In the AWS Service Catalog console, in the Provisioned products list, choose the cluster name (for example, mycluster), and make a note of the ClusterName value in the Events section. You will use this value in the following steps.
  2. In the Provisioned products list, choose the Amazon ECS Service name (for example, myservice) and make a note of the FargateService value in the Events section. The service name is the last value of the URL (for example, calc).
  3. In the Provisioned products list, choose myide to view the Cloud9Url output, and go to the Cloud9 IDE.
  4. Use the values for ClusterName and FargateService to update buildspec-deploy.yml:
post_build:
  commands:
    - echo Pushing the Docker image to ECR...
    - docker push $AWS_ACCOUNT_ID.dkr.ecr.$AWS_DEFAULT_REGION.amazonaws.com/$IMAGE_REPO_NAME
    - aws ecs update-service --service calc --cluster SC-798471071174-pp-yd2ges6sdfw24-ECSCluster-KvvgXqebExXr --force-new-deployment
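Rather than hand-editing the long generated cluster name, you can render the update-service command from your own values. This templating helper is just a sketch of ours (the service and cluster names below are placeholders for your FargateService and ClusterName outputs):

```python
from string import Template

# Fill in your own ClusterName and FargateService values here.
UPDATE_SERVICE = Template(
    "aws ecs update-service --service $service "
    "--cluster $cluster --force-new-deployment"
)

cmd = UPDATE_SERVICE.substitute(
    service="calc",                        # FargateService value
    cluster="my-provisioned-ecs-cluster",  # ClusterName value (placeholder)
)
print(cmd)
```

Paste the rendered command into the post_build section of buildspec-deploy.yml.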
  5. Update the application.py line as follows:
@application.route('/', methods=['GET'])
def index():
    return "<h1>I'm alive! Completed 2nd cycle!</h1><p>version 3.2</p>"
  6. To check in the changes to the pipeline, run the following commands from the Cloud9 IDE terminal.
cd ~/environment/ETLTasks
git add --all
git commit -m "version 3.2"
git push
  7. Open the AWS CodePipeline console and check for the new code (version 3.2).

Figure 5: ETL-Container-ProductPipeline in the CodePipeline console.

  8. Open the external URL in a web browser and view the webpage. You should see the following:

Figure 6: External URL displays “I’m alive! Completed 2nd Cycle!”

At this point, you’ve completed two full cycles: you checked in code, which went through the pipeline (built, validated, and pushed to Amazon ECR) and was deployed to the ECS cluster running the ECS service that hosts your container. The container is running a real-time web API service on the internet. In this cycle, you updated the containerized web application with zero downtime through rolling updates.
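To convince yourself there really was no downtime, you can poll the external URL while the deployment rolls out. A hedged sketch using only the standard library (the URL and version string below are the ones from this walkthrough; substitute your own):

```python
import time
import urllib.request

def watch_for_version(url, target, attempts=60, delay=1.0):
    """Poll `url` until the response body contains `target`
    (e.g. "version 3.2"). Returns (found, failures); failures should
    stay at 0 during a zero-downtime rolling update."""
    failures = 0
    for _ in range(attempts):
        try:
            with urllib.request.urlopen(url, timeout=5) as resp:
                body = resp.read().decode()
                if resp.status == 200 and target in body:
                    return True, failures
        except Exception:
            failures += 1  # connection refused / 5xx counts as downtime
        time.sleep(delay)
    return False, failures

# Example (replace with your cluster's external URL):
# found, failures = watch_for_version("http://<external-url>/", "version 3.2")
# assert found and failures == 0
```

Run it in a second terminal just before you push the version 3.2 commit; the old version keeps answering until the new tasks take over.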

Cleanup

To avoid ongoing charges, complete the following steps to delete the resources provisioned in this post.

  1. As the end user or developer, from the AWS Service Catalog console, go to the Provisioned products list to terminate the provisioned resources. Terminate the products one at a time, in reverse order of provisioning.
    • myservice
    • mycluster
    • myide
    • mypipeline

To terminate mypipeline, first delete the contents of the S3 bucket that was created as part of the mypipeline provisioning, and delete the images in the Amazon ECR etltest repository.

  2. For each product, from Actions, choose Terminate, type terminate, and then choose Terminate.
  3. In the IAM console, sign in as a user with administrator permissions, and remove the end user with developer permissions from the ServiceCatalogEndusers group.

Note: For this step, delete any constraints and remove any users or groups associated with the portfolio that were added outside of the CloudFormation template.

  4. Sign in to the AWS CloudFormation console with administrator permissions. Under Stacks, choose SC-RA-ECS-Portfolio, and then choose Delete to remove the nested stacks.

Conclusion

In this blog post, we showed how you can automate the deployment of containerized applications using a standardized DevOps pipeline. By using AWS Service Catalog, you can rapidly deploy, in hours rather than days, the pipeline solution while meeting your organization’s governance best practices and compliance requirements.

About the authors:

Ji Jung is a solutions architect focusing on AWS Marketplace, AWS Control Tower, and AWS Service Catalog. He is passionate about cloud technologies and building innovative solutions to help customers. When not working, he enjoys spending time with his family and playing sports.


Chris Chapman is a Partner solutions architect covering AWS Marketplace, AWS Service Catalog, and AWS Control Tower. Chris was a software developer and data engineer for many years, and now his core mission is helping customers and partners automate AWS infrastructure deployment and provisioning.