AWS for Industries

Telco Workload Placement and Configuration Updates with Lambda and inotify

With users and applications demanding high bandwidth and low latencies, mobile service providers and application developers are working hard to meet these performance requirements. To deliver high bandwidth with low latencies, operators often need to deploy mobile Container Network Functions (CNFs) that provide network connectivity services closer to the traffic location. Because each CNF is deployed into a Kubernetes cluster, this results in multiple Kubernetes clusters in a Communication Service Provider's (CSP) network.

The deployment of multiple Kubernetes clusters naturally requires a multi-cluster infrastructure management solution as well as a mechanism to place workloads onto the desired cluster. Amazon Web Services (AWS) provides this multi-cluster infrastructure control plane management through the managed Amazon Elastic Kubernetes Service (Amazon EKS) control plane, which AWS monitors and scales for each deployed cluster. The Amazon EKS console provides a single pane of glass to manage multiple clusters, with each cluster exposing its own Kubernetes API endpoint to which kubectl and Helm commands are directed.

For modularity and agility reasons, it is not desirable for business and network optimization applications to deal with infrastructure and deployment through kubectl and Helm commands. Instead, those applications should issue an API call with the desired cluster (or location), the function name, and the configured values, and a common placement mechanism should manage deployment of the workload to the desired cluster. Further, whenever an application's configuration changes, the business application should only have to change the configuration file and let a common configuration mechanism take care of updating the configuration of the appropriate CNFs. This also ensures that multiple business applications can use the same framework for application deployments and upgrades, and it makes the system modular with well-defined interfaces. Business applications then don't have to be updated when the Container-as-a-Service (CaaS) platform and infrastructure change, because the API calls can be kept consistent.

In this blog, we provide a mechanism for such workload placement and configuration updates by using an API call that invokes an AWS Lambda (Lambda) function; the approach is straightforward to use, yet flexible and reliable. Lambda is designed to provide high availability for both the service itself and the functions it operates. Using the inotify feature of Linux, we will create an updated Docker image for Open5GS and use it to implement day 2 configuration changes. The proposed method is straightforward to implement and doesn't require writing complex Kubernetes controllers.

Prerequisites

  1. An AWS account with admin privileges: For this blog, we assume that you already have an AWS account with admin privileges and AWS Identity and Access Management (IAM) user credentials. These credentials (AWS Access Key ID and AWS Secret Access Key) give you the permissions needed to access your AWS account from your machine.
  2. Ability to download GitHub repo to your local machine to build images and to upload to your AWS account.
  3. Basic understanding of AWS services, such as AWS CloudFormation (CloudFormation), Amazon Virtual Private Cloud (Amazon VPC), Amazon EKS, AWS Lambda, and Amazon API Gateway (API Gateway).
  4. AWS Command Line Interface (AWS CLI) version 2, git, eksctl, kubectl, and Helm installed on your local machine or in an AWS Cloud9 environment. If you are using an AWS Cloud9 environment, please make sure that it has sufficient permissions to create EKS clusters, Lambda functions, and API Gateway APIs.
  5. Please be aware that some services used in this example, such as Amazon EKS, incur service charges.

Important Artifacts

In this implementation, we have chosen open-source Open5GS as a sample mobile application, but the method described here doesn't depend on Open5GS. You can apply the proposed technique of using Lambda and API Gateway to place other containerized applications, and you might also be able to adapt most CNFs by using the inotify method. The following diagram represents the deployment architecture that will be used.

Deployed Architecture


Figure 1 – High-level overview of the solution’s architecture

In the architecture above, a business application calls an API (a sample API request body is provided later) that is served by API Gateway. The gateway invokes a Lambda function, which is created from a container image stored in Amazon ECR. The Lambda function contains the logic to choose the right Amazon EKS cluster, obtain permission to access that cluster, and invoke a Helm deployment to launch the application specified by the business logic on the selected cluster. This architecture has the advantage of isolating the business layer from the infrastructure and platform layers. You can access the repository containing the artifacts needed for this blog at https://github.com/aws-samples/telco-workload-placement-via-lambda-inotify

Here we describe the unique features of this repository:
1. Build the required Open5GS container images

The steps to build the required container images can be found in the README file under the open5gs-docker-files folder that is part of the GitHub repo. They include creating the Amazon Elastic Container Registry (Amazon ECR) repos, building and tagging the images, and finally pushing them to your Amazon ECR repos. This step is needed to set the container repository parameter used inside the Helm values file. The open5gs-docker-files folder also contains the additional inotify Linux script, which is explained later in the blog.

2. Updated open-source Open5GS core with inotify feature

The Linux inotify facility provides a mechanism for monitoring file system events. In this blog, we use it to detect changes to the mounted configuration files of the Open5GS CNFs, which change whenever a new values.yaml is applied to the Amazon EKS cluster through a Helm update issued by Lambda. This is done by creating a wrapper script, auto-reload-open5gs.sh, that controls both the startup and the reload of the Open5GS CNF process when it detects a change in the ConfigMap.

To add the inotify utility to the Open5GS image, the Docker image build includes the inotify command line utilities (inotify itself is a feature of the Linux kernel). The wrapper script that uses them is as follows:

#!/usr/bin/bash

## Start the CNF with its mounted config file
open5gs-${CNF_NAME}d -D -c /open5gs/config-map/${CNF_NAME}.yaml

oldcksum=$(cksum /open5gs/config-map/${CNF_NAME}.yaml)

## Watch the config-map directory and restart the CNF when its config file changes
inotifywait -e modify,move,create,delete -mr --timefmt '%d/%m/%y %H:%M' --format '%T' \
/open5gs/config-map/ | while read date time; do
    newcksum=$(cksum /open5gs/config-map/${CNF_NAME}.yaml)
    if [ "$newcksum" != "$oldcksum" ]; then
        echo "At ${time} on ${date}, config file update detected."
        oldcksum=$newcksum
        pkill open5gs-${CNF_NAME}d
        open5gs-${CNF_NAME}d -D -c /open5gs/config-map/${CNF_NAME}.yaml
    fi
done

The script checks whether the config file actually changed when a Helm update is issued by the Lambda function. If the associated config file has changed, it stops the existing process (Open5GS does not currently support reloading its configuration on a HUP signal without stopping the daemon) and restarts it with the new config.

The auto-reload-open5gs.sh script is called in each CNF’s xxx-deploy.yaml by commands like the following:

containers:
      - name: amf
        image: "{{ .Values.open5gs.image.repository }}:{{ .Values.open5gs.image.tag }}"
        imagePullPolicy: {{ .Values.open5gs.image.pullPolicy }}
        #command: ["open5gs-amfd", "-c", "/open5gs/config-map/amf.yaml"]
        command: ["/home/auto-reload-open5gs.sh"]
        env:
        - name: CNF_NAME
          value: amf
        volumeMounts:
        - name: {{ .Release.Name }}-amf-config
          mountPath: /open5gs/config-map/
      volumes:
        - name: {{ .Release.Name }}-amf-config
          configMap:
            name: {{ .Release.Name }}-amf-config
            defaultMode: 420

If Open5GS had the ability to apply a new config while running, there would be no need to stop the current process, and the pkill command would not be needed in the inotify script above. For CNFs that support runtime config updates, comment out the pkill line during the image creation process.
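Open5GS does not support this today, so the following is purely an illustration, assuming a hypothetical CNF daemon that re-reads its configuration on SIGHUP. In that case, the body of the watch loop above could be adapted to signal the running process instead of restarting it:

if [ "$newcksum" != "$oldcksum" ]; then
    echo "At ${time} on ${date}, config file update detected."
    oldcksum=$newcksum
    ## Hypothetical: signal the daemon to re-read its config instead of killing and restarting it
    pkill -HUP open5gs-${CNF_NAME}d
fi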

3. Lambda function code and associated permissions

We create a custom Lambda function that parses the trigger from API Gateway, with the various parameters required for deployment, such as the Helm chart version, the name of the application to be deployed, and the namespace. You can find the Lambda code in the file trigger-script.sh. Because Lambda doesn't support public Amazon ECR, we have to create our own private Amazon ECR repository for the Lambda container image. In trigger-script.sh, we parse the input from API Gateway and then update the kubeconfig for the desired cluster. We use /tmp/ for storing the kubeconfig, as well as for the Helm installation directory, because the default locations are not writable by Lambda containers. If the values file is provided by API Gateway as an Amazon Simple Storage Service (Amazon S3) location, it is also copied to the /tmp/ folder and used. If no values file is provided, the default values.yaml is used while deploying the Helm charts.

We have provided associated Lambda code in the git repository.
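The trigger-script.sh in the repository is the authoritative implementation; the following is only a minimal sketch of the flow it implements (parse the event, point kubeconfig and Helm at /tmp/, optionally pull a values file from Amazon S3, and run a Helm install/upgrade against the selected cluster). It assumes the container's bootstrap passes the API Gateway event JSON as the first argument, and the field names follow the sample API body shown below; the exact variable handling is an illustrative assumption, not the repository's code.

#!/bin/bash
## Minimal sketch of a Lambda container handler (not the repository's exact trigger-script.sh)

EVENT_BODY="$1"                       ## JSON payload passed by API Gateway (assumption)
export HOME=/tmp                      ## /tmp is the only writable path in Lambda
export KUBECONFIG=/tmp/kubeconfig
export HELM_CACHE_HOME=/tmp/helm/cache
export HELM_CONFIG_HOME=/tmp/helm/config

CLUSTER=$(echo "$EVENT_BODY"   | jq -r '.eks_cluster_name')
REGION=$(echo "$EVENT_BODY"    | jq -r '.location')
CHART=$(echo "$EVENT_BODY"     | jq -r '.chart_repository')
VERSION=$(echo "$EVENT_BODY"   | jq -r '.chart_version')
RELEASE=$(echo "$EVENT_BODY"   | jq -r '.function_name')
NAMESPACE=$(echo "$EVENT_BODY" | jq -r '.run_namespace')
VALUES=$(echo "$EVENT_BODY"    | jq -r '.values_yaml_file // empty')

## Authenticate to the selected EKS cluster
aws eks update-kubeconfig --name "$CLUSTER" --region "$REGION" --kubeconfig "$KUBECONFIG"

## Pull the values file from S3 if one was supplied, otherwise rely on the chart defaults
VALUES_ARG=""
if [ -n "$VALUES" ]; then
  aws s3 cp "$VALUES" /tmp/values.yaml
  VALUES_ARG="-f /tmp/values.yaml"
fi

## Install or upgrade the release on the selected cluster
helm upgrade --install "$RELEASE" "$CHART" --version "$VERSION" \
  --namespace "$NAMESPACE" --create-namespace $VALUES_ARG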

4. The following is a sample API request body that a business application can issue to deploy a desired application in a desired cluster:

{
  "location": "us-east-2",
  "chart_repository": "oci://$AWS_ACCOUNT_NUMBER.dkr.ecr.$AWS_REGION.amazonaws.com/open5gs-charts",
  "values_yaml_file": "s3://S3_BUCKET_NAME/values.yaml",
  "chart_version": "0.0.4",
  "function_name": "core5g-1",
  "run_namespace": "open5gs-1",
  "eks_cluster_name": "cluster-a"
}
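If you deploy the API described later in this blog to a stage, a business application could send this body with a simple HTTP POST. The invoke URL, stage, and the placement-request.json file name below are placeholders for illustration only:

## Save the JSON body above as placement-request.json (hypothetical file name),
## then POST it to your own API Gateway invoke URL
curl -X POST \
  -H "Content-Type: application/json" \
  -d @placement-request.json \
  https://<API_ID>.execute-api.us-east-2.amazonaws.com/<STAGE>/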

Deployment

You can run this software in your AWS account or in an AWS Cloud9 environment. If you are using an AWS Cloud9 environment, please make sure that it has sufficient permissions to create Amazon EKS clusters, Lambda functions, and API Gateway APIs.

Step 1: Setup

Install Kubectx with the following commands:

wget https://github.com/ahmetb/kubectx/releases/download/v0.9.4/kubectx_v0.9.4_linux_x86_64.tar.gz

tar -xzf kubectx_v0.9.4_linux_x86_64.tar.gz

sudo mv kubectx /usr/local/bin/

Now export the account number and Region so that you don't have to type them in for every command:

sudo yum -y install jq

export AWS_ACCOUNT_NUMBER=$(aws sts get-caller-identity | jq -r '.Account')

export AWS_REGION=$(curl -s 169.254.169.254/latest/dynamic/instance-identity/document | jq -r '.region')

echo "export $AWS_ACCOUNT_NUMBER =${AWS_ACCOUNT_NUMBER}" | tee -a ~/.bash_profile

echo "export AWS_REGION=${AWS_REGION}" | tee -a ~/.bash_profile

echo $AWS_ACCOUNT_NUMBER

echo $AWS_REGION

Copy the repository to the local machine or AWS Cloud9 environment you are using for the lab by using git clone:

git clone https://github.com/aws-samples/telco-workload-placement-via-lambda-inotify

Step 2: Create Amazon EKS Clusters and Node Groups

Since the focus of this blog is workload placement on Amazon EKS clusters, we will use default Amazon EKS commands to create two clusters. If your application requires a specific type of Amazon EKS clusters, you can use CloudFormation templates with specific parameters to create clusters.

Since Amazon EKS requires more than one Availability Zone (AZ), check the AZs in your selected Region and then use commands similar to the following (these commands are for the us-east-2 Region):

eksctl create cluster -n cluster-a --region us-east-2 --zones us-east-2a,us-east-2b --node-zones us-east-2a

eksctl create cluster -n cluster-b --region us-east-2 --zones us-east-2a,us-east-2b --node-zones us-east-2b

You can monitor the progress of the cluster creation in the CloudFormation console. An Amazon EKS cluster creation can take 20-30 minutes.

Once the cluster creation is complete, check the status by running the following command:

aws eks list-clusters --region $AWS_REGION

Now update Kubeconfig for both clusters:

aws eks update-kubeconfig --name cluster-a --region $AWS_REGION

aws eks update-kubeconfig --name cluster-b --region $AWS_REGION

Verify that the clusters are up and running. To list the cluster ARNs, simply run kubectx:

kubectx

Example output:

arn:aws:eks:us-east-2:<AWS_ACCOUNT_NO>:cluster/cluster-a

arn:aws:eks:us-east-2:<AWS_ACCOUNT_NO>:cluster/cluster-b

Switch between the clusters by using the context for the cluster as previously described. For example:

## switch to cluster-a cluster:

kubectx arn:aws:eks:us-east-2:<AWS_ACCOUNT_NO>:cluster/cluster-a

## To switch to cluster-b cluster:

kubectx arn:aws:eks:us-east-2:<AWS_ACCOUNT_NO>:cluster/cluster-b

Once you are in the right context, you can then use kubectl commands to check pods:

kubectl get pods -A

Step 3: Lambda Function Setup 

We are creating a custom Lambda function that has the necessary roles and software installed to run kubectl and Helm commands against both clusters. This requires a container image for Lambda, which we will store in a private Amazon ECR repository.

Start by running the following commands to create appropriate roles and permissions:

cd telco-workload-placement-via-lambda-inotify/iam-policies

aws iam create-role --role-name lambda-orchestration-role --assume-role-policy-document file://lambda_assume_policy.json

Now assign a policy to the previously created Lambda role:

aws iam put-role-policy --role-name lambda-orchestration-role --policy-name lambda_orchestration_policy --policy-document file://lambda_orchestration_policy.json

aws iam attach-role-policy --role-name lambda-orchestration-role --policy-arn arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryReadOnly
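Optionally, you can confirm that the role exists and has the expected inline and attached policies with the following commands:

aws iam get-role --role-name lambda-orchestration-role

aws iam list-role-policies --role-name lambda-orchestration-role

aws iam list-attached-role-policies --role-name lambda-orchestration-role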

The Lambda function calls a Helm deployment to the desired cluster with the chart repository specified by the user.

The code for the Lambda function is in trigger-script.sh, the file that we will use to build the Lambda container image. Examine the downloaded files in the lambda folder to see the supporting files needed for image and role creation.

We will create a private Amazon ECR repo to hold the Lambda container image, which will be used to create the Lambda function.

To create the repository and also make and push the Lambda container image to it, run the following code:

## Create ECR Repo

aws ecr create-repository --repository-name helm-trigger-function --region $AWS_REGION

## Change directory to telco-workload-placement-via-lambda-inotify/lambda/Dockerfiles folder

cd ../lambda/Dockerfiles/

## Build the lambda image and tag it

docker build -t $AWS_ACCOUNT_NUMBER.dkr.ecr.$AWS_REGION.amazonaws.com/helm-trigger-function:v0.0.1 .

## Login to ECR

aws ecr get-login-password --region $AWS_REGION | docker login --username AWS --password-stdin $AWS_ACCOUNT_NUMBER.dkr.ecr.$AWS_REGION.amazonaws.com

## Push the lambda image to ECR Repo

docker push $AWS_ACCOUNT_NUMBER.dkr.ecr.$AWS_REGION.amazonaws.com/helm-trigger-function:v0.0.1

Now to create the Lambda function:
1. Go to Lambda in the AWS console.
2. Select the Create function button.
3. Select Container image from the authoring options.
4. Function name: placeAndConfigureFunction
5. Container image URI: select Browse images, choose helm-trigger-function as the Amazon ECR image repository (the image that we just pushed), and then select the image as shown in Figure 3. Click Select image.
6. Expand the Change default execution role section, select Use an existing role, and choose the lambda-orchestration-role that we created earlier.
7. Click Create function.

Figure 2 – Creating Lambda function from a container image

Figure 3 – Selection of container image for the Lambda function

Figure 4 – Integration of Lambda function with API gateway

Figure 5 – Update configuration of Lambda function

8. Navigate to the Configuration tab and select Edit.

9. Update the memory to 512 MB and the timeout to 5 min. Then choose Save. (A CLI equivalent of these console steps is shown below.)
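If you prefer the AWS CLI over the console, a roughly equivalent command would be the following; it reuses the function name, role, image tag, memory, and timeout from the console steps above:

## Hypothetical CLI equivalent of the console steps above
aws lambda create-function \
  --function-name placeAndConfigureFunction \
  --package-type Image \
  --code ImageUri=$AWS_ACCOUNT_NUMBER.dkr.ecr.$AWS_REGION.amazonaws.com/helm-trigger-function:v0.0.1 \
  --role arn:aws:iam::$AWS_ACCOUNT_NUMBER:role/lambda-orchestration-role \
  --memory-size 512 \
  --timeout 300 \
  --region $AWS_REGION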

Go to the AWS Cloud9 console.

Add the Lambda role to the Amazon EKS authentication for the two clusters that we created with the following code:

eksctl create iamidentitymapping --region $AWS_REGION --cluster cluster-a --arn arn:aws:iam::$AWS_ACCOUNT_NUMBER:role/lambda-orchestration-role --group system:masters --username admin

eksctl create iamidentitymapping --region $AWS_REGION --cluster cluster-b --arn arn:aws:iam::$AWS_ACCOUNT_NUMBER:role/lambda-orchestration-role --group system:masters --username admin
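You can optionally confirm that the mapping was added to each cluster's authentication configuration with the following commands:

eksctl get iamidentitymapping --cluster cluster-a --region $AWS_REGION

eksctl get iamidentitymapping --cluster cluster-b --region $AWS_REGION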

You will also push your Open5GS images and Helm charts by following the process below to create a Helm chart repository.

Step 4: Package and Push Open5GS Images to Your Private Repo

For security and maintenance purposes, it is better to create separate repositories for different functions.

To create private repositories for the Open5GS images, build the Open5GS images, and push them to the created repositories, enter the following code:

###Create an ECR repo (private repo):

aws --region ${AWS_REGION} ecr create-repository --repository-name core5g/open5gs-aio

aws --region ${AWS_REGION} ecr create-repository --repository-name core5g/open5gs-web

Note: Docker commands might require sudo privileges.

###From the open5gs-docker-files folder of the repo, build and tag the open5gs-aio container image (all-in-one; this is used for all the CNFs except the web GUI):

docker build -t ${AWS_ACCOUNT_NUMBER}.dkr.ecr.${AWS_REGION}.amazonaws.com/core5g/open5gs-aio:v2.5.6-inotify -f open5gs-aio-dockerfile .

###Build and tag the open5gs web GUI container image:

docker build -t ${AWS_ACCOUNT_NUMBER}.dkr.ecr.${AWS_REGION}.amazonaws.com/core5g/open5gs-web:v2.5.6 -f open5gs-web-dockerfile .

###Login to ECR and push images to your ECR repos:

aws ecr get-login-password --region ${AWS_REGION} | docker login --username AWS --password-stdin ${AWS_ACCOUNT_NUMBER}.dkr.ecr.${AWS_REGION}.amazonaws.com

docker push ${AWS_ACCOUNT_NUMBER}.dkr.ecr.${AWS_REGION}.amazonaws.com/core5g/open5gs-aio:v2.5.6-inotify

docker push ${AWS_ACCOUNT_NUMBER}.dkr.ecr.${AWS_REGION}.amazonaws.com/core5g/open5gs-web:v2.5.6

You can now proceed to use these ECR images in your Helm values file.

Step 5: Package and Push Helm Charts to Amazon ECR

To package and push the Open5GS Helm chart to Amazon ECR, follow these steps:

### Create ECR Repo

aws ecr create-repository --repository-name open5gs-charts --region $AWS_REGION

### Package charts

cd 5gcore-helm-inotify-charts/

helm package .

### Push to ECR Repo

aws ecr get-login-password \
     --region $AWS_REGION | helm registry login \
     --username AWS \
     --password-stdin $AWS_ACCOUNT_NUMBER.dkr.ecr.$AWS_REGION.amazonaws.com     

helm push open5gs-charts-0.0.4.tgz oci://$AWS_ACCOUNT_NUMBER.dkr.ecr.$AWS_REGION.amazonaws.com/ 
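To confirm that the chart landed in the registry, you can optionally query it back; helm show chart works against OCI references in recent Helm 3 releases:

## Verify the pushed chart and version
helm show chart oci://$AWS_ACCOUNT_NUMBER.dkr.ecr.$AWS_REGION.amazonaws.com/open5gs-charts --version 0.0.4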

Step 6: Update values.yaml Files and Upload Them to Amazon S3

In the GitHub repo, in the helm-values-file folder, you will find two files: values.yaml and values-test.yaml.

In both files, replace the repository with the names of the repositories that you created in Step 4; the names should be similar to the following examples:

${AWS_ACCOUNT_NUMBER}.dkr.ecr.${AWS_REGION}.amazonaws.com/core5g/open5gs-aio

${AWS_ACCOUNT_NUMBER}.dkr.ecr.${AWS_REGION}.amazonaws.com/core5g/open5gs-web

values.yaml file:

open5gs:
  image:
    repository: REPLACE_WITH_ECR_REPO_DETAILS
    pullPolicy: IfNotPresent #Always #IfNotPresent
    tag: "v2.5.6-inotify"  

k8swait:
   repository: groundnuty/k8s-wait-for
   pullPolicy: IfNotPresent
   tag: "v1.4"

webui:
  image:
    repository: REPLACE_WITH_ECR_REPO_DETAILS
    pullPolicy: IfNotPresent
    tag: "v2.5.6"
  ingress:

Once you have updated both files, navigate to Amazon S3 in the AWS console and choose Create bucket in the same Region in which you are running the lab. For the bucket name, you must choose a globally unique name.

We propose something like the following:

<your initial or a unique string>-cnf-placement.

Once you have created the bucket, upload the updated values.yaml and values-test.yaml files to the S3 bucket while keeping the default configuration; in particular, bucket Access Control Lists (ACLs) should remain disabled. We use Amazon S3 to store the values.yaml files for the Helm charts and specify the S3 file location when calling API Gateway, which in turn invokes the Lambda function with the specified file. Keeping the values.yaml file in Amazon S3 provides a straightforward way for a service-level orchestrator to indicate new configurations to be applied. Because the Helm charts do not have to change when a values.yaml file changes, we do not need to recreate them or update the Amazon ECR repositories or catalog with each change to a values.yaml file.
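If you prefer the AWS CLI for this step, the bucket creation and upload look roughly like the following; the bucket name is a placeholder that you must replace with your own globally unique name:

## Placeholder bucket name: replace with your own globally unique name
export S3_BUCKET_NAME=<your-initials>-cnf-placement

aws s3 mb s3://$S3_BUCKET_NAME --region $AWS_REGION

## Upload the updated values files from the helm-values-file folder of the repo
aws s3 cp values.yaml s3://$S3_BUCKET_NAME/
aws s3 cp values-test.yaml s3://$S3_BUCKET_NAME/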

Step 7: API Gateway

Now create a REST API, which gives you complete control over the request and response, along with API management capabilities.

  • Go to the API Gateway in the AWS Console and select REST API.
  • Choose Build.
  • On the next page, select REST as the protocol and New API under Create new API.
  • Name your API as OrchestrationAPI and click Create API.
  • From the Actions dropdown click Create Method.
  • Then from the resulting dropdown, select POST.
  • Click the check mark to confirm.

Figure 6 – Creation of method in API Gateway

For the setup:

1. Select Lambda Function as the integration type; do not select Use Lambda Proxy integration.

2. Enter your Lambda function name, placeAndConfigureFunction.

Figure 7 – Integration of API Gateway with Lambda function

Click OK on the resulting pop-up that asks you to give API Gateway permission to invoke the Lambda function.

Step 8: Deployment and Testing

Click Test (marked with the lightning bolt icon) in API Gateway to open the test page.

What follows is a sample body:

{
  "location": "us-east-2",
  "chart_repository": "oci://$AWS_ACCOUNT_NUMBER.dkr.ecr.$AWS_REGION.amazonaws.com/open5gs-charts",
  "values_yaml_file": "s3://S3_BUCKET_NAME/values.yaml",
  "chart_version": "0.0.4",
  "function_name": "core5g-1",
  "run_namespace": "open5gs-1",
  "eks_cluster_name": "cluster-a"
}

Be sure to replace the location field with the Region you've been working in. Replace the values_yaml_file path with your S3 bucket name, and replace the Open Container Initiative (OCI) compliant Amazon ECR chart repository with the one created in the previous step, as shown in the earlier API section. Paste the body into the Request Body field and click Test.
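The same test can also be driven from the CLI if you prefer. The API and resource IDs below are placeholders, and placement-request.json is the hypothetical file holding the request body above:

## Look up the placeholder IDs with 'aws apigateway get-rest-apis' and 'aws apigateway get-resources'
aws apigateway test-invoke-method \
  --rest-api-id <REST_API_ID> \
  --resource-id <RESOURCE_ID> \
  --http-method POST \
  --body file://placement-request.json \
  --region $AWS_REGION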

Figure 8 – API Gateway Test Invocation

The expected response shows a status code of 200, the time taken for the response, and the status of the deployment, including the Open5GS containers that were deployed and their status. It will look similar to the following:

Figure 9 – Return response from test API Invocation

You can check the running pods by switching to the right cluster with the kubectx commands from Step 2. Because Lambda needs warm-up time, you might get an error the first time; if you do, just reissue the command.

To see the pods in the namespace requested by the API call and created by the Lambda function, run a command similar to the following:

kubectl -n open5gs-1 get pods

Step 9: Day 2 Configuration Update

Check the configuration of the Access and Mobility Management Function (AMF). To do this, first exec into the pod by running a command similar to the following, with your own namespace and deployment name:

kubectl -n open5gs-2 exec -it deploy/core5g-2-amf-deployment -- /bin/sh

Once inside the pod, run the following to see the AMF configuration:

cat /open5gs/config-map/amf.yaml

Check the values of the tac and s-nssai parameters.
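If you prefer not to read the whole file, a command like the following (run from inside the pod) narrows the output to the relevant parameters; the exact key names can differ slightly between Open5GS versions:

## Show the TAC and S-NSSAI related lines from the AMF config
grep -A 2 -E 'tac|nssai' /open5gs/config-map/amf.yaml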

Now, run the same API command as previously provided in Step 8, but with the values-test.yaml file instead of values.yaml as the values_yaml_file.

What follows is a sample JSON body for running the test API with the values-test.yaml file:

{    "location": "us-east-2",
    "chart_repository": "oci://$AWS_ACCOUNT_NUMBER.dkr.ecr.$AWS_REGION.amazonaws.com/open5gs-charts",
    "values_yaml_file": "s3://S3_BUCKET_NAME/values-test.yaml",
    "chart_version": "0.0.4",
    "function_name": "core5g-2",
    "run_namespace": "open5gs-2",
    "eks_cluster_name": "cluster-a"
}

Follow the same process as previously described to log in to the AMF container and check the values of tac and s-nssai. You will notice that the values have changed because of the difference in configuration between values-test.yaml and values.yaml.
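If the Helm update only changed the ConfigMap, you can also confirm that the inotify wrapper detected and applied the update by checking the container logs for the message the wrapper script prints; the deployment and namespace names below assume the sample request above:

## The wrapper script echoes this line whenever it detects a config change
kubectl -n open5gs-2 logs deploy/core5g-2-amf-deployment | grep "config file update detected"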

Step 10: Clean Up

Delete the resources that were used during this blog by deleting the CloudFormation stacks that were created for the clusters, the API Gateway API, the S3 bucket, the Lambda function, and the Amazon ECR repositories.

To do so, follow these instructions (a CLI alternative is sketched after the list):

1. Go to the CloudFormation console:

  • On the Stacks page, select each stack that was created for this blog, one by one.
  • Choose Delete.
  • Select Delete Stack when prompted.
  • This process takes some time and can’t be stopped once begun.

2. Go to the Amazon ECR console:

  • Go to the repository in the region of the lab.
  • Choose the Private tab and select the repository that was created.
  • Choose Delete and verify.

3. Go to the API Gateway dashboard:

  • Select the API that you created earlier
  • In actions, choose Delete.

4. Go to the Amazon S3 dashboard:

  • Select the S3 bucket that you created.
  • Choose Empty to remove the objects, and then choose Delete.

5. Go to the Lambda console:

  • Select the Lambda function that you created.
  • Expand the actions dropdown, and click Delete.
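Alternatively, most of the cleanup can be done from the CLI. The following is a sketch, assuming the resource names used earlier in this blog; the REST API ID and bucket name are placeholders:

## Delete the clusters (this removes the CloudFormation stacks that eksctl created)
eksctl delete cluster --name cluster-a --region $AWS_REGION
eksctl delete cluster --name cluster-b --region $AWS_REGION

## Delete the Lambda function and the ECR repositories created for this blog
aws lambda delete-function --function-name placeAndConfigureFunction --region $AWS_REGION
aws ecr delete-repository --repository-name helm-trigger-function --force --region $AWS_REGION
aws ecr delete-repository --repository-name core5g/open5gs-aio --force --region $AWS_REGION
aws ecr delete-repository --repository-name core5g/open5gs-web --force --region $AWS_REGION
aws ecr delete-repository --repository-name open5gs-charts --force --region $AWS_REGION

## Delete the REST API (placeholder ID) and the S3 bucket (placeholder name)
aws apigateway delete-rest-api --rest-api-id <REST_API_ID> --region $AWS_REGION
aws s3 rb s3://<S3_BUCKET_NAME> --force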

Conclusion

In this blog, we have demonstrated a way to implement an agile workload placement method that provides a simple abstraction to business applications by using AWS container and serverless constructs. Specifically, we created a Lambda function and assigned it roles and permissions to access Amazon EKS clusters that we wanted to place our workloads in. We then created an API interface that facilitated an abstraction to define desired clusters and workloads. Finally, we used the API call to place a sample telco workload, Open5GS, into a desired cluster. The use of serverless technologies for this task provides an inherent resiliency in this solution implementation.

These constructs are flexible and can be adapted to meet CSPs' and network function providers' automation requirements. The approach is straightforward yet extensible. In particular, it can be extended to add analytics on data collected by observability solutions. One way to achieve this could be with an additional Lambda function that uses those analytics; this new function could be invoked by API Gateway and could in turn invoke the Lambda function created in this lab, providing additional levels of abstraction and intelligence.

For further information on AWS telco offerings, and how some of these constructs have been used with the service providers, please visit https://aws.amazon.com/telecom/.

Manjari Asawa

Dr. Manjari Asawa is a Senior Solution Architect in the AWS Worldwide Telecom Business Unit. Her focus areas are telco orchestration, assurance, and use cases for autonomous operation of networks. She received her Ph.D. in Electrical Engineering from the University of Michigan at Ann Arbor.

Christopher Adigun

Christopher Adigun is a Telco Solutions Architect in AWS Business Development and Strategic Industries. He works with telecommunications customers on architecting their workloads using AWS services and cloud-native principles, with emphasis on containerization and user-plane acceleration design.