AWS for Industries

Deploy infrastructure for telecom workloads in an air-gapped AWS environment

Overview

Cloud solutions for enterprise applications focus on creating a virtualized infrastructure for managing the functions needed to deliver services. With cloud computing, these functions are decoupled from the hardware and operated as cloud-native or virtual functions on a horizontal platform: a unified, standardized platform that supports the deployment and management of cloud-native applications. Amazon Elastic Kubernetes Service (Amazon EKS) provides the Kubernetes platform for cloud-native application deployment on AWS.

A top priority of enterprise application deployment is security. A general principle is to minimize reliance on the internet for add-ons and other packages. Because Amazon EKS supports private endpoints, you can create and access your EKS clusters through private endpoints, avoiding both exposure of the APIs to the internet and the need for public NAT gateways and internet gateways. You can make your VPCs “air-gapped,” meaning they are logically isolated from other systems and networks and have no connectivity to the internet.

Here is how Amazon EKS can be configured for a completely air-gapped environment:

1. Isolated VPC:

  • Create a dedicated VPC: Make sure the VPC is isolated from other VPCs in your AWS account.
  • Restrict internet access: Block all outbound internet traffic from the VPC using security groups or network ACLs.
  • Disable public IP addresses: Prevent any instances in the VPC from having public IP addresses.

2. Private EKS cluster:

  • Create a private cluster: Configure the EKS cluster to be private, which means it can only be accessed from within the VPC.
  • Use private endpoints: Create private endpoints for the Amazon EKS API server and control plane components. This allows access to the cluster without exposing it to the public internet (see the CLI sketch after this list).

3. Internal DNS:

  • Implement a private DNS: Use Amazon Route 53 Private Hosted Zone to provide DNS resolution within the VPC, which prevents any reliance on public DNS servers.

4. Secure add-ons and container images:

  • Private container registries: Store container images in a private container registry within the VPC to prevent unauthorized access.
  • Secure add-ons: Only use add-ons that are compatible with air-gapped environments and can be deployed without external dependencies.
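
As a rough illustration of items 2 and 3, the following AWS CLI sketch restricts an existing cluster's API endpoint to private access and associates a Route 53 private hosted zone with the isolated VPC. The cluster name, VPC ID, region, and domain are placeholders, not values from this solution.

# Placeholders -- replace with your own cluster, VPC, region, and domain values.
AWS_REGION=eu-central-1
CLUSTER_NAME=telco-private-eks
VPC_ID=vpc-0123456789abcdef0

# Item 2: allow only private access to the EKS API server endpoint.
aws eks update-cluster-config \
  --region "$AWS_REGION" \
  --name "$CLUSTER_NAME" \
  --resources-vpc-config endpointPublicAccess=false,endpointPrivateAccess=true

# Item 3: create a Route 53 private hosted zone associated with the isolated VPC,
# so DNS resolution stays inside the VPC.
aws route53 create-hosted-zone \
  --name corp.internal \
  --caller-reference "$(date +%s)" \
  --hosted-zone-config Comment="Private zone for air-gapped VPC",PrivateZone=true \
  --vpc VPCRegion="$AWS_REGION",VPCId="$VPC_ID"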

To accelerate and standardize the deployment and orchestration of cloud-native applications, you need a repeatable and reusable framework and a set of packaging standards for common reusable add-ons. This framework defines the guidelines for infrastructure setup for enterprises in air-gapped environments. This post dives deeper into design patterns that empower you to set up EKS clusters and worker nodes and to install add-ons and Container Network Interfaces (CNIs) within secure, air-gapped deployments. These patterns can help you harness the power of cloud computing while maintaining the highest levels of security.

Why do you need an air-gapped environment?

The decision to adopt EKS private clusters depends on several factors: specific use cases and security requirements, overall architectural design and compliance mandates, and the willingness to manage the increased complexity associated with private cluster deployments. Managing private EKS clusters involves complexities such as setting up and configuring private networking components, establishing secure communication channels for cluster management, implementing advanced access controls and monitoring mechanisms, and adhering to a well-defined governance framework.

To effectively tackle these complexities, you must use infrastructure as code (IaC) tools for consistent and repeatable deployments. Additionally, by implementing automated processes for cluster lifecycle management (including updates and upgrades) and considering managed services such as AWS Control Tower and AWS CloudFormation StackSets, you can streamline the management of multiple private EKS clusters across different environments or accounts.
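
For example, CloudFormation StackSets can push the same networking template from the central account to several workload accounts. The following sketch is illustrative only; the stack set name, template file, account IDs, and region are placeholders.

# Placeholders -- stack set name, template, accounts, and region are examples only.
aws cloudformation create-stack-set \
  --stack-set-name private-eks-network \
  --template-body file://private-vpc.yaml \
  --capabilities CAPABILITY_NAMED_IAM

aws cloudformation create-stack-instances \
  --stack-set-name private-eks-network \
  --accounts 111122223333 444455556666 \
  --regions eu-central-1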

For the security-conscious telecom industry, where adherence to strict compliance regulations and tight integration with internal networks are crucial, EKS private clusters offer a compelling solution. By providing robust security, streamlining compliance efforts, and simplifying internal network integration, private EKS clusters can address the unique challenges faced by telecom environments. However, adopting private clusters requires the organization to have the necessary resources and expertise to manage the associated complexities effectively.

Solution architecture

In this architecture, we have a central continuous integration and continuous delivery (CI/CD) account that is used for automation development, centrally manages the external add-ons, and has connectivity to the internet. Furthermore, we have one or more target deployment environments, such as dev, test, and prod. This is your workload account, and this environment is air-gapped; in other words, it has no internet connectivity.

Prerequisites

The following prerequisites are necessary to complete this solution:

  1. For the EKS private cluster, you need a source VPC (with public internet access) and a target VPC (with no outbound internet access). The target VPC cannot reach the internet through the source VPC, because VPC peering is not transitive.
  2. To create a VPC in the source account, refer to the VPC Template. This VPC has internet access through public NAT gateways.
  3. To create a private VPC for the EKS private cluster in the target workload account, refer to the Private VPC Template. This VPC has no public NAT gateways or internet gateways, hence no internet connectivity.
  4. To establish a connection between the source and target VPCs, you can use VPC peering or an AWS Transit Gateway attachment (a CLI sketch follows this list).
  5. If you want to use VPC peering for connectivity between the VPCs, then refer to VPC peering.
  6. If you want to use AWS Transit Gateway for VPC connectivity, then refer to VPC Transit Gateway attachment.
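
The following sketch shows both connectivity options from item 4 using the AWS CLI. All IDs (VPCs, account, transit gateway, subnet) are placeholders; with either option, you still need to update route tables and security groups in both VPCs.

# Option A: VPC peering between the source (CI/CD) and target (workload) VPCs.
aws ec2 create-vpc-peering-connection \
  --vpc-id vpc-0aaa0000000000001 \
  --peer-vpc-id vpc-0bbb0000000000002 \
  --peer-owner-id 444455556666
aws ec2 accept-vpc-peering-connection \
  --vpc-peering-connection-id pcx-0123456789abcdef0   # run in the peer account

# Option B: attach the target VPC to an existing AWS Transit Gateway instead.
aws ec2 create-transit-gateway-vpc-attachment \
  --transit-gateway-id tgw-0123456789abcdef0 \
  --vpc-id vpc-0bbb0000000000002 \
  --subnet-ids subnet-0123456789abcdef0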

Figure 1 Networking setup for CI/CD and workload accounts

Secure add-on management for air-gapped cluster on Amazon EKS

Cloud deployments on Amazon EKS use add-ons to extend Kubernetes functionality. Some add-ons are open source and hosted in public repositories that are unreachable from your air-gapped EKS cluster due to security restrictions. The following is a secure approach to managing these add-ons:

Leveraging CI/CD for secure add-on management:

  1. Download from public repositories: While your EKS cluster is isolated, your CI/CD pipeline can have controlled internet access. Use this access to download the required Docker images from public repositories.
  2. Security screening: Before introducing them into your air-gapped environment, meticulously scan the downloaded Docker images for vulnerabilities using industry-standard tools integrated within your CI/CD pipeline.
  3. Private Amazon ECR repository: Relying on public repositories within the cluster introduces unnecessary risk. Instead, host the security-approved Docker images in a private Amazon Elastic Container Registry (Amazon ECR) repository (see the sketch after this list). This grants you granular control over image versions and access within your VPC.
  4. Helm chart for streamlined deployment: Develop a Helm chart to manage the installation and configuration of these add-ons. Helm simplifies deployment and keeps configuration consistent across your telco workloads.
  5. Version control for controlled rollouts: Store the Helm chart in a version control system like Git, accessible by your central CI/CD account. This enables version control and allows for controlled rollouts of updates to your workload clusters.
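
A minimal sketch of steps 1-3, run from the CI/CD account. The account ID and region are placeholders, the public Multus image location is given only as an example source, and Trivy stands in for whichever scanner your pipeline uses.

ACCOUNT_ID=111122223333                                    # placeholder CI/CD account
REGION=eu-central-1
SRC_IMAGE=ghcr.io/k8snetworkplumbingwg/multus-cni:stable   # example public source image
DST_IMAGE=${ACCOUNT_ID}.dkr.ecr.${REGION}.amazonaws.com/cicd/multus-cni:stable

# Step 1: download the image from the public repository.
docker pull "$SRC_IMAGE"

# Step 2: scan the image before it enters the air-gapped environment.
trivy image --exit-code 1 --severity HIGH,CRITICAL "$SRC_IMAGE"

# Step 3: push the approved image to the private Amazon ECR repository.
aws ecr get-login-password --region "$REGION" | \
  docker login --username AWS --password-stdin "${ACCOUNT_ID}.dkr.ecr.${REGION}.amazonaws.com"
docker tag "$SRC_IMAGE" "$DST_IMAGE"
docker push "$DST_IMAGE"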

By following these steps, you can securely integrate essential add-ons into your air-gapped cloud deployments on Amazon EKS. This approach balances the functionality offered by add-ons with the paramount security of your isolated environment. Refer to the repo for the Multus add-on reference.

End-to-end infrastructure deployment

The following steps outline the end-to-end infrastructure deployment.

Step 1: Workload account infrastructure deployment

Set up VPC private endpoints to connect to the AWS services privately. These endpoints are a combination of gateway VPC endpoints (Amazon S3 and Amazon DynamoDB) and interface VPC endpoints for the other services listed in the following figures.

Figure 2 Workload account infrastructure deployment

In the workload account, deploy the VPC endpoints as follows:

Figure 3 VPC endpoints for the AWS services deployed in the workload account
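
For reference, the following AWS CLI sketch creates one gateway endpoint and one interface endpoint; the VPC, route table, subnet, and security group IDs are placeholders, and you would repeat the interface call for each service shown in Figure 3.

VPC_ID=vpc-0123456789abcdef0        # placeholder IDs
RTB_ID=rtb-0123456789abcdef0
SUBNET_ID=subnet-0123456789abcdef0
SG_ID=sg-0123456789abcdef0
REGION=eu-central-1

# Gateway endpoint for Amazon S3 (added to the VPC route table).
aws ec2 create-vpc-endpoint \
  --vpc-id "$VPC_ID" \
  --vpc-endpoint-type Gateway \
  --service-name com.amazonaws.${REGION}.s3 \
  --route-table-ids "$RTB_ID"

# Interface endpoint for the Amazon ECR API (repeat for ecr.dkr, ec2, sts, and so on).
aws ec2 create-vpc-endpoint \
  --vpc-id "$VPC_ID" \
  --vpc-endpoint-type Interface \
  --service-name com.amazonaws.${REGION}.ecr.api \
  --subnet-ids "$SUBNET_ID" \
  --security-group-ids "$SG_ID" \
  --private-dns-enabled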

Once the networking and VPC endpoints are configured, deploy your EKS cluster using eksctl commands.

Step 2: Set up the EKS private cluster with eksctl

The following prerequisites are needed to complete this step:

  • eksctl installed and configured: Follow the official Amazon EKS user guide for the setup.
  • An AWS account with appropriate IAM permissions: Make sure you have the necessary permissions to create EKS clusters and VPC resources.
  • In this setup, you deploy the EKS cluster from a bastion host or AWS Cloud9 instance in the target account.
  • Follow the steps in the guide to deploy an EKS private cluster in the target account (a sample cluster configuration sketch follows this list).
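
The following is a sample eksctl configuration for a fully private cluster, written to a file and applied from the bastion host. The cluster name, Kubernetes version, VPC ID, and subnet IDs are placeholders for your target VPC.

cat > private-cluster.yaml <<'EOF'
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: telco-private-eks        # placeholder name
  region: eu-central-1
  version: "1.29"                # placeholder Kubernetes version
vpc:
  id: vpc-0123456789abcdef0      # target (air-gapped) VPC
  subnets:
    private:
      eu-central-1a: { id: subnet-0123456789abcdef0 }
      eu-central-1b: { id: subnet-0fedcba9876543210 }
privateCluster:
  enabled: true                  # API server endpoint is private only
  skipEndpointCreation: true     # VPC endpoints were already created in Step 1
EOF

eksctl create cluster -f private-cluster.yaml --without-nodegroup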

After the successful deployment of the EKS cluster, check the EKS cluster in the AWS console, as shown in the following figure.

Figure 4 EKS cluster setup with private endpoint access

Step 3: Deploy nodegroup and add-ons

Deploy the self-managed nodegroup using CloudFormation and apply the aws-auth ConfigMap to enable communication between the EKS cluster and your nodegroup, as sketched below. Refer to the steps in the GitHub repository.
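
The aws-auth ConfigMap maps the nodegroup's instance role to the Kubernetes node groups; the following is a minimal sketch, with a placeholder role ARN that you would take from the nodegroup CloudFormation stack outputs.

# The role ARN below is a placeholder -- use the NodeInstanceRole ARN from the
# nodegroup CloudFormation stack outputs.
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: ConfigMap
metadata:
  name: aws-auth
  namespace: kube-system
data:
  mapRoles: |
    - rolearn: arn:aws:iam::111122223333:role/eks-nodegroup-NodeInstanceRole
      username: system:node:{{EC2PrivateDNSName}}
      groups:
        - system:bootstrappers
        - system:nodes
EOF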

You can validate your nodegroups from a bastion node, running kubectl commands such as the following:

kubectl get nodes -o wide
NAME                                          STATUS   ROLES    AGE    VERSION                INTERNAL-IP   EXTERNAL-IP   OS-IMAGE         KERNEL-VERSION                 CONTAINER-RUNTIME
ip-10-4-1-122.eu-central-1.compute.internal   Ready    <none>   102d   v1.21.14-eks-48e63af   10.4.1.122    <none>        Amazon Linux 2   5.4.231-137.341.amzn2.x86_64   docker://20.10.17


kubectl get pods -o wide -n kube-system
NAME                       READY   STATUS    RESTARTS   AGE    IP           NODE                                          NOMINATED NODE   READINESS GATES
aws-node-vnb2f             2/2     Running   0          102d   10.4.1.122   ip-10-4-1-122.eu-central-1.compute.internal   <none>           <none>
coredns-xxxxxxx-l9wg7   1/1     Running   0          110d   10.4.1.96    ip-10-4-1-122.eu-central-1.compute.internal   <none>           <none>
coredns-xxxxxx-psr2p   1/1     Running   0          110d   10.4.1.45    ip-10-4-1-122.eu-central-1.compute.internal   <none>           <none>
kube-multus-ds-6pfgn       1/1     Running   0          71d    10.4.1.122   ip-10-4-1-122.eu-central-1.compute.internal   <none>           <none>
kube-proxy-x47lq           1/1     Running   0          102d   10.4.1.122   ip-10-4-1-122.eu-central-1.compute.internal   <none>           <none>

Now that your EKS cluster and nodegroup are ready, deploy the add-ons from the CI/CD account.

Refer to the deploy private images instructions in the repository.
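
As one example, if the Multus image was pushed to the private ECR repository as described earlier and packaged in a Helm chart in your CI/CD repository, the installation could look like the following sketch; the chart path, image repository, and tag are placeholders.

ACCOUNT_ID=111122223333      # placeholder CI/CD account that hosts the ECR repository
REGION=eu-central-1

helm upgrade --install multus ./charts/multus \
  --namespace kube-system \
  --set image.repository=${ACCOUNT_ID}.dkr.ecr.${REGION}.amazonaws.com/cicd/multus-cni \
  --set image.tag=latest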

After adding add-ons such as Multus to your cluster, verify the installation by running commands such as the following:

kubectl describe daemonsets.apps -n kube-system kube-multus-ds
Name:           kube-multus-ds
Selector:       name=multus
Node-Selector:  <none>
Labels:         app=multus
                name=multus
                tier=node
Annotations:    deprecated.daemonset.template.generation: 2
Desired Number of Nodes Scheduled: 1
Current Number of Nodes Scheduled: 1
Number of Nodes Scheduled with Up-to-date Pods: 1
Number of Nodes Scheduled with Available Pods: 1
Number of Nodes Misscheduled: 0
Pods Status:  1 Running / 0 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:           app=multus
                    name=multus
                    tier=node
  Service Account:  multus
  Init Containers:
   install-multus-binary:
    Image:      xxxxxx.dkr.ecr.eu-central-1.amazonaws.com/cicd/multus-cni:latest
    Port:       <none>
    Host Port:  <none>
    Command:
      cp
      /usr/src/multus-cni/bin/multus-shim
      /host/opt/cni/bin/multus-shim
    Requests:
      cpu:        10m
      memory:     15Mi
    Environment:  <none>
    Mounts:
      /host/opt/cni/bin from cnibin (rw)
  Containers:
   kube-multus:
    Image:      xxxxxx.dkr.ecr.eu-central-1.amazonaws.com/cicd/multus-cni:latest
    Port:       <none>
    Host Port:  <none>
    Command:
      /usr/src/multus-cni/bin/multus-daemon
    Limits:
      cpu:     100m
      memory:  50Mi
    Requests:
      cpu:        100m
      memory:     50Mi
    Environment:  <none>
    Mounts:
      /etc/cni/net.d/multus.d from multus-daemon-config (ro)
      /host/etc/cni/net.d from cni (rw)
      /host/run from host-run (rw)
      /hostroot from hostroot (rw)
      /run/k8s.cni.cncf.io from host-run-k8s-cni-cncf-io (rw)
      /run/netns from host-run-netns (rw)
      /var/lib/cni/multus from host-var-lib-cni-multus (rw)
      /var/lib/kubelet from host-var-lib-kubelet (rw)
  Volumes:
   cni:
    Type:          HostPath (bare host directory volume)
    Path:          /etc/cni/net.d
    HostPathType:  
   cnibin:
    Type:          HostPath (bare host directory volume)
    Path:          /opt/cni/bin
    HostPathType:  
   hostroot:
    Type:          HostPath (bare host directory volume)
    Path:          /
    HostPathType:  
   multus-daemon-config:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      multus-daemon-config
    Optional:  false
   host-run:
    Type:          HostPath (bare host directory volume)
    Path:          /run
    HostPathType:  
   host-var-lib-cni-multus:
    Type:          HostPath (bare host directory volume)
    Path:          /var/lib/cni/multus
    HostPathType:  
   host-var-lib-kubelet:
    Type:          HostPath (bare host directory volume)
    Path:          /var/lib/kubelet
    HostPathType:  
   host-run-k8s-cni-cncf-io:
    Type:          HostPath (bare host directory volume)
    Path:          /run/k8s.cni.cncf.io
    HostPathType:  
   host-run-netns:
    Type:          HostPath (bare host directory volume)
    Path:          /run/netns/
    HostPathType:  
Events:            <none>

Cleaning up

Go to CloudFormation in the AWS Management Console and delete the nodegroup and Amazon EKS stacks.

Figure 5 EKS cluster and nodegroup CloudFormation stacks in the air-gapped environment

Delete the CloudFormation stack for the VPC endpoints.

Figure 6 VPC endpoint CloudFormation stack in the air-gapped environment
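
Alternatively, you can delete the same stacks with the AWS CLI; the stack names below are placeholders, so use the names shown in your CloudFormation console.

aws cloudformation delete-stack --stack-name eks-nodegroup-stack
aws cloudformation delete-stack --stack-name eks-cluster-stack
aws cloudformation delete-stack --stack-name vpc-endpoints-stack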

Conclusion

This post presented the steps to deploy telco workloads in the AWS Cloud in an account with no internet connectivity. You can achieve security, controlled access, and the benefits of using private endpoints. Furthermore, you can use the additional Kubernetes add-ons required for telco deployments, manage them centrally, and share them with workload accounts.

You can also extend these patterns to your existing deployments. Leave your thoughts and questions in the comments section; we would love to hear your feedback. Reach out to your AWS account teams and Partner Solution Architects to learn more about 5G and AWS for Telecom.

Raghvendra Singh

Raghvendra Singh is a Principal Portfolio Manager and Telco Network Transformation specialist at AWS. He specializes in AWS infrastructure, containerization, and networking, helping users accelerate their modernization journey on AWS.

Sujata Roy Chowdhury

Sujata Roy Chowdhury is a DevOps Consultant at AWS. She specializes in infrastructure deployment, security, and CI/CD, helping customers onboard to AWS.

Viyoma Sachdeva

Viyoma Sachdeva is a Principal Industry Specialist at AWS. She specializes in AWS DevOps, containerization, and IoT, helping customers accelerate their journey to the AWS Cloud.