AWS Architecture Blog
Deploying IBM Cloud Pak for Data on Red Hat OpenShift Service on AWS
Editor’s note, October 2024: This post is now obsolete. For the latest post, refer to Deploying IBM Cloud Pak for Data on Red Hat OpenShift Service on AWS.
Amazon Web Services (AWS) customers who want to deploy and use IBM Cloud Pak for Data (CP4D) on the AWS Cloud can use Red Hat OpenShift Service on AWS (ROSA).
ROSA is a fully managed service, jointly supported by AWS and Red Hat. It is managed by Red Hat Site Reliability Engineers and provides a pay-as-you-go pricing model, as well as a unified billing experience on AWS.
With ROSA, customers do not need to manage the lifecycle of Red Hat OpenShift Container Platform clusters. Instead, they can focus on developing new solutions and innovating faster, using IBM’s integrated data and artificial intelligence platform on AWS to differentiate their business and meet their ever-changing enterprise needs.
In this post, we explain how to create a ROSA classic cluster and install an instance of IBM Cloud Pak for Data.
Cloud Pak for Data architecture
Here, we are implementing a highly available ROSA classic cluster with three Availability Zones (AZs), three master nodes, three infrastructure nodes, and three worker nodes.
Review the AWS Regions and Availability Zones documentation and the regions where ROSA is available to choose the best region for your deployment.
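Once the rosa CLI is installed (covered in the installation steps later in this post) and you have logged in with rosa login, you can also list the Regions where ROSA is available directly from the command line. This is an optional, minimal sketch:

# Optional: list the AWS Regions where ROSA can be deployed
# (requires the rosa CLI and a completed "rosa login").
rosa list regions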
Figure 1 demonstrates the solution’s architecture.
In our scenario, we are building a public ROSA classic cluster, with internet-facing Elastic Load Balancers providing access to our cluster. Consider using a ROSA private cluster when you are deploying CP4D in your AWS account.
We are using Amazon Elastic Block Store (Amazon EBS) and Amazon Elastic File System (Amazon EFS) for the cluster’s persistent storage. Review the IBM documentation for information about supported storage options.
Also, before deploying CP4D for production workloads, review the AWS prerequisites for ROSA and follow the Security best practices in IAM documentation to protect your AWS account.
Cost
You are responsible for the cost of the AWS services used when deploying CP4D in your AWS account. For cost estimates, see the pricing pages for each AWS service you use.
Prerequisites
Before getting started, review the following prerequisites for this solution:
- This blog assumes familiarity with: CP4D, Terraform, Amazon Elastic Compute Cloud (Amazon EC2), Amazon EBS, Amazon EFS, Amazon Virtual Private Cloud, and AWS Identity and Access Management (IAM).
- Access to an AWS account, with permissions to create the resources described in the installation steps section.
- An AWS IAM user, with the permissions described in the AWS prerequisites for ROSA documentation.
- Verification of the required AWS service quotas to deploy ROSA. You can request service-quota increases from the AWS console.
- Access to an IBM entitlement API key: either a 60-day trial or an existing entitlement.
- Access to a Red Hat ROSA token; you can register on the Red Hat website to obtain one.
- A bastion host to run the CP4D installer; we have used an AWS Cloud9 workspace. You can use another device, provided it supports the required software packages.
Installation steps
Complete the following steps to deploy CP4D on ROSA:
- Navigate to the ROSA console to enable the ROSA service:
- Click Get started.
- On the Verify ROSA prerequisites page, select I agree to share my contact information with Red Hat.
- Choose Enable ROSA.
- Create an AWS Cloud9 environment to run your CP4D installation. We’ve used a t3.medium instance (Figure 2).
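If you prefer to script this step, the AWS CLI can create the workspace as well. The sketch below is an illustrative alternative to the console flow; the environment name, auto-stop timeout, and image alias are assumptions, and newer AWS CLI versions may require the image ID parameter:

# Hypothetical CLI alternative: create a t3.medium AWS Cloud9 environment.
# The name, auto-stop timeout, and image alias are illustrative values.
aws cloud9 create-environment-ec2 \
  --name cp4d-installer-workspace \
  --instance-type t3.medium \
  --image-id amazonlinux-2-x86_64 \
  --automatic-stop-time-minutes 60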
- After your AWS Cloud9 environment is up, close the Welcome tab, open a new Terminal tab, and install the required packages:
$ curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip" $ unzip awscliv2.zip $ sudo ./aws/install $ sudo yum -y install jq gettext $ sudo wget -c https://mirror.openshift.com/pub/openshift-v4/clients/rosa/latest/rosa-linux.tar.gz -O - | sudo tar -xz -C /usr/local/bin/ $ export OPENSHIFT_VERSION=4.14.30 $ sudo wget -c https://mirror.openshift.com/pub/openshift-v4/x86_64/clients/ocp/${OPENSHIFT_VERSION}/openshift-client-linux-${OpenshiftVersion}.tar.gz -O - | sudo tar -xz -C /usr/local/bin/
- Create an IAM policy named cp4d-installer-permissions with the following permissions:
{ "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Action": [ "autoscaling:DescribeAutoScalingGroups", "cloudwatch:GetMetricData", "ec2:AllocateAddress", "ec2:AssociateAddress", "ec2:AssociateDhcpOptions", "ec2:AssociateRouteTable", "ec2:AttachInternetGateway", "ec2:AttachNetworkInterface", "ec2:AuthorizeSecurityGroupEgress", "ec2:AuthorizeSecurityGroupIngress", "ec2:CopyImage", "ec2:CreateDhcpOptions", "ec2:CreateInternetGateway", "ec2:CreateNatGateway", "ec2:CreateNetworkInterface", "ec2:CreateRoute", "ec2:CreateRouteTable", "ec2:CreateSecurityGroup", "ec2:CreateSubnet", "ec2:CreateTags", "ec2:CreateVolume", "ec2:CreateVpc", "ec2:CreateVpcEndpoint", "ec2:CreateVpcEndpointServiceConfiguration", "ec2:DeleteDhcpOptions", "ec2:DeleteInternetGateway", "ec2:DeleteNatGateway", "ec2:DeleteNetworkInterface", "ec2:DeleteRoute", "ec2:DeleteRouteTable", "ec2:DeleteSecurityGroup", "ec2:DeleteSnapshot", "ec2:DeleteSubnet", "ec2:DeleteTags", "ec2:DeleteVolume", "ec2:DeleteVpc", "ec2:DeleteVpcEndpointServiceConfigurations", "ec2:DeleteVpcEndpoints", "ec2:DeregisterImage", "ec2:DescribeAccountAttributes", "ec2:DescribeAddresses", "ec2:DescribeAvailabilityZones", "ec2:DescribeDhcpOptions", "ec2:DescribeImages", "ec2:DescribeInstanceAttribute", "ec2:DescribeInstanceCreditSpecifications", "ec2:DescribeInstanceStatus", "ec2:DescribeInstanceTypeOfferings", "ec2:DescribeInstanceTypes", "ec2:DescribeInstances", "ec2:DescribeInternetGateways", "ec2:DescribeKeyPairs", "ec2:DescribeNatGateways", "ec2:DescribeNetworkAcls", "ec2:DescribeNetworkInterfaces", "ec2:DescribePrefixLists", "ec2:DescribeRegions", "ec2:DescribeReservedInstancesOfferings", "ec2:DescribeRouteTables", "ec2:DescribeSecurityGroups", "ec2:DescribeSecurityGroupRules", "ec2:DescribeSubnets", "ec2:DescribeTags", "ec2:DescribeVolumes", "ec2:DescribeVpcAttribute", "ec2:DescribeVpcClassicLink", "ec2:DescribeVpcClassicLinkDnsSupport", "ec2:DescribeVpcEndpointServiceConfigurations", "ec2:DescribeVpcEndpointServicePermissions", "ec2:DescribeVpcEndpointServices", "ec2:DescribeVpcEndpoints", "ec2:DescribeVpcs", "ec2:DetachInternetGateway", "ec2:DisassociateRouteTable", "ec2:GetConsoleOutput", "ec2:GetEbsDefaultKmsKeyId", "ec2:ModifyInstanceAttribute", "ec2:ModifyNetworkInterfaceAttribute", "ec2:ModifySubnetAttribute", "ec2:ModifyVpcAttribute", "ec2:ModifyVpcEndpointServicePermissions", "ec2:ReleaseAddress", "ec2:ReplaceRouteTableAssociation", "ec2:RevokeSecurityGroupEgress", "ec2:RevokeSecurityGroupIngress", "ec2:RunInstances", "ec2:StartInstances", "ec2:StopInstances", "ec2:TerminateInstances", "elasticfilesystem:CreateFileSystem", "elasticfilesystem:CreateMountTarget", "elasticfilesystem:DeleteFileSystem", "elasticfilesystem:DeleteMountTarget", "elasticfilesystem:DescribeFileSystems", "elasticfilesystem:DescribeMountTargets", "elasticfilesystem:TagResource", "elasticloadbalancing:AddTags", "elasticloadbalancing:ApplySecurityGroupsToLoadBalancer", "elasticloadbalancing:AttachLoadBalancerToSubnets", "elasticloadbalancing:ConfigureHealthCheck", "elasticloadbalancing:CreateListener", "elasticloadbalancing:CreateLoadBalancer", "elasticloadbalancing:CreateLoadBalancerListeners", "elasticloadbalancing:CreateTargetGroup", "elasticloadbalancing:DeleteLoadBalancer", "elasticloadbalancing:DeleteTargetGroup", "elasticloadbalancing:DeregisterInstancesFromLoadBalancer", "elasticloadbalancing:DeregisterTargets", "elasticloadbalancing:DescribeAccountLimits", "elasticloadbalancing:DescribeInstanceHealth", "elasticloadbalancing:DescribeListeners", 
"elasticloadbalancing:DescribeLoadBalancerAttributes", "elasticloadbalancing:DescribeLoadBalancers", "elasticloadbalancing:DescribeTags", "elasticloadbalancing:DescribeTargetGroupAttributes", "elasticloadbalancing:DescribeTargetGroups", "elasticloadbalancing:DescribeTargetHealth", "elasticloadbalancing:ModifyLoadBalancerAttributes", "elasticloadbalancing:ModifyTargetGroup", "elasticloadbalancing:ModifyTargetGroupAttributes", "elasticloadbalancing:RegisterInstancesWithLoadBalancer", "elasticloadbalancing:RegisterTargets", "elasticloadbalancing:SetLoadBalancerPoliciesOfListener", "iam:AddRoleToInstanceProfile", "iam:AttachRolePolicy", "iam:CreateInstanceProfile", "iam:CreateOpenIDConnectProvider", "iam:CreatePolicyVersion", "iam:CreateRole", "iam:DeleteInstanceProfile", "iam:DeletePolicyVersion", "iam:DeleteRole", "iam:DetachRolePolicy", "iam:GetInstanceProfile", "iam:GetPolicy", "iam:GetRole", "iam:GetRolePolicy", "iam:GetUser", "iam:ListAttachedRolePolicies", "iam:ListInstanceProfiles", "iam:ListInstanceProfilesForRole", "iam:ListPolicyTags", "iam:ListPolicyVersions", "iam:ListRolePolicies", "iam:ListRoleTags", "iam:ListRoles", "iam:ListUserPolicies", "iam:ListUsers", "iam:PassRole", "iam:RemoveRoleFromInstanceProfile", "iam:SimulatePrincipalPolicy", "iam:TagInstanceProfile", "iam:TagOpenIDConnectProvider", "iam:TagPolicy", "iam:TagRole", "iam:UntagRole", "kms:DescribeKey", "route53:ChangeResourceRecordSets", "route53:ChangeTagsForResource", "route53:CreateHostedZone", "route53:DeleteHostedZone", "route53:GetAccountLimit", "route53:GetChange", "route53:GetHostedZone", "route53:ListHostedZones", "route53:ListHostedZonesByName", "route53:ListResourceRecordSets", "route53:ListTagsForResource", "route53:UpdateHostedZoneComment", "s3:CreateBucket", "s3:DeleteBucket", "s3:DeleteObject", "s3:DeleteObjectVersion", "s3:GetAccelerateConfiguration", "s3:GetBucketAcl", "s3:GetBucketCORS", "s3:GetBucketLocation", "s3:GetBucketLogging", "s3:GetBucketObjectLockConfiguration", "s3:GetBucketPolicy", "s3:GetBucketRequestPayment", "s3:GetBucketTagging", "s3:GetBucketVersioning", "s3:GetBucketWebsite", "s3:GetEncryptionConfiguration", "s3:GetLifecycleConfiguration", "s3:GetObject", "s3:GetObjectAcl", "s3:GetObjectTagging", "s3:GetObjectVersion", "s3:GetReplicationConfiguration", "s3:ListBucket", "s3:ListBucketVersions", "s3:PutBucketAcl", "s3:PutBucketTagging", "s3:PutBucketVersioning", "s3:PutEncryptionConfiguration", "s3:PutObject", "s3:PutObjectAcl", "s3:PutObjectTagging", "secretsmanager:GetSecretValue", "servicequotas:GetServiceQuota", "servicequotas:ListAWSDefaultServiceQuotas", "servicequotas:ListServiceQuotas", "sts:AssumeRole", "sts:AssumeRoleWithWebIdentity", "sts:GetCallerIdentity", "tag:GetResources", "tag:UntagResources" ], "Resource": "*" } ] }
- Create an IAM role:
1. Select AWS service as the trusted entity type and Amazon EC2 as the use case, then click Next: Permissions.
2. Select the cp4d-installer-permissions policy, and click Next.
3. Name it cp4d-installer, and click Create role.
- From your AWS Cloud9 IDE, click the circle button on the top right, and select Manage EC2 Instance (Figure 3).
- On the Amazon EC2 console, select the AWS Cloud9 instance, then choose Actions / Security / Modify IAM Role.
- Choose cp4d-installer from the IAM Role drop-down, and click Update IAM role (Figure 4).
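If you prefer to script the role setup instead of using the console, here is a hedged CLI sketch that approximates the preceding steps; the trust-policy file name and the instance ID placeholder are assumptions:

# Assumed trust policy allowing EC2 to assume the role, saved as ec2-trust.json:
# {"Version":"2012-10-17","Statement":[{"Effect":"Allow",
#   "Principal":{"Service":"ec2.amazonaws.com"},"Action":"sts:AssumeRole"}]}
aws iam create-role --role-name cp4d-installer \
  --assume-role-policy-document file://ec2-trust.json
aws iam attach-role-policy --role-name cp4d-installer \
  --policy-arn arn:aws:iam::<YOUR_ACCOUNT_ID>:policy/cp4d-installer-permissions
aws iam create-instance-profile --instance-profile-name cp4d-installer
aws iam add-role-to-instance-profile --instance-profile-name cp4d-installer \
  --role-name cp4d-installer
# Attach the instance profile to the Cloud9 EC2 instance. If a profile is already
# associated, use "aws ec2 replace-iam-instance-profile-association" instead.
aws ec2 associate-iam-instance-profile \
  --instance-id <YOUR_CLOUD9_INSTANCE_ID> \
  --iam-instance-profile Name=cp4d-installer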
- Update the IAM settings for your AWS Cloud9 workspace:
# $C9_PID holds the ID of your AWS Cloud9 environment.
aws cloud9 update-environment --environment-id $C9_PID --managed-credentials-action DISABLE
rm -vf ${HOME}/.aws/credentials
- Set up your AWS environment:
$ export ACCOUNT_ID=$(aws sts get-caller-identity --output text --query Account)
$ export TOKEN=$(curl -s -X PUT "http://169.254.169.254/latest/api/token" -H "X-aws-ec2-metadata-token-ttl-seconds: 21600")
$ export AWS_REGION=$(curl -H "X-aws-ec2-metadata-token: $TOKEN" http://169.254.169.254/latest/dynamic/instance-identity/document | jq -r '.region')
$ aws configure set default.region ${AWS_REGION}
- Navigate to the Red Hat Hybrid Cloud Console, and copy your OpenShift Cluster Manager API Token.
- Use the token and log in to your Red Hat account:
rosa login --token=<YOUR_ROSA_API_TOKEN>
- Verify that your AWS account satisfies the quotas to deploy your cluster:
rosa verify quota
- When deploying ROSA for the first time, create the account-wide roles:
rosa create account-roles --mode auto --yes
- Create your ROSA cluster:
$ export CLUSTER_NAME=<YOUR_CLUSTER_NAME>
$ export ROSA_VERSION=4.14.30
$ rosa create cluster --cluster-name ${CLUSTER_NAME} --sts \
  --multi-az \
  --region $AWS_REGION \
  --version $ROSA_VERSION \
  --compute-machine-type m6i.4xlarge \
  --replicas 3 \
  --availability-zones ${AWS_REGION}a,${AWS_REGION}b,${AWS_REGION}c \
  --operator-roles-prefix $CLUSTER_NAME \
  --mode auto --yes \
  --watch
- Once your cluster is ready, create a cluster-admin user and take note of the cluster API URL, username, and password:
rosa create admin --cluster=${CLUSTER_NAME}
- Log in to your cluster using the login information from the previous step. For example:
oc login https://<YOUR_CLUSTER_API_ADDRESS>:6443 \
  --username cluster-admin \
  --password <YOUR_CLUSTER_ADMIN_PASSWORD>
- Create an inbound rule in your worker nodes security group, allowing NFS traffic from your cluster’s VPC CIDR:
WORKER_NODE=$(oc get nodes --selector=node-role.kubernetes.io/worker -o jsonpath='{.items[0].metadata.name}')
VPC_ID=$(aws ec2 describe-instances --filters "Name=private-dns-name,Values=$WORKER_NODE" --query 'Reservations[*].Instances[*].{VpcId:VpcId}' | jq -r '.[0][0].VpcId')
VPC_CIDR=$(aws ec2 describe-vpcs --filters "Name=vpc-id,Values=$VPC_ID" --query 'Vpcs[*].CidrBlock' | jq -r '.[0]')
SG_ID=$(aws ec2 describe-instances --filters "Name=private-dns-name,Values=$WORKER_NODE" --query 'Reservations[*].Instances[*].{SecurityGroups:SecurityGroups}' | jq -r '.[0][0].SecurityGroups[0].GroupId')
aws ec2 authorize-security-group-ingress \
  --group-id $SG_ID \
  --protocol tcp \
  --port 2049 \
  --cidr $VPC_CIDR | jq .
- Create an Amazon EFS file system:
EFS_ID=$(aws efs create-file-system --performance-mode generalPurpose --encrypted --region ${AWS_REGION} --tags Key=Name,Value=ibm_cp4d_fs | jq -r '.FileSystemId')
SUBNETS=($(aws ec2 describe-subnets --filters "Name=vpc-id,Values=${VPC_ID}" "Name=tag:Name,Values=*${CLUSTER_NAME}*private*" | jq --raw-output '.Subnets[].SubnetId'))
for subnet in ${SUBNETS[@]}; do
  aws efs create-mount-target \
    --file-system-id $EFS_ID \
    --subnet-id $subnet \
    --security-groups $SG_ID
done
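Mount-target creation takes a minute or two. Before moving on, you can optionally confirm that each mount target is available; this check is a sketch and not part of the original instructions:

# Optional: wait until all mount targets report "available" before continuing.
aws efs describe-mount-targets --file-system-id $EFS_ID \
  --query 'MountTargets[*].LifeCycleState' --output text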
- Log in to Container software library on My IBM and copy your API key.
- In this blog, we are installing CP4D with IBM Watson Machine Learning and IBM Watson Studio.
- Review the IBM documentation to determine which CP4D components you need to install to support your requirements.
- Export environment variables for the CP4D installation. The COMPONENTS variable defines which services will be installed:
$ export OCP_URL=https://<YOUR_CLUSTER_API_ADDRESS>:6443
$ export OPENSHIFT_TYPE=ROSA
$ export IMAGE_ARCH=amd64
$ export OCP_USERNAME=cluster-admin
$ export OCP_PASSWORD=<YOUR_CLUSTER_ADMIN_PASSWORD>
$ export SERVER_ARGUMENTS="--server=${OCP_URL}"
$ export LOGIN_ARGUMENTS="--username=${OCP_USERNAME} --password=${OCP_PASSWORD}"
$ export CPDM_OC_LOGIN="cpd-cli manage login-to-ocp ${SERVER_ARGUMENTS} ${LOGIN_ARGUMENTS}"
$ export OC_LOGIN="oc login ${OCP_URL} ${LOGIN_ARGUMENTS}"
$ export PROJECT_CERT_MANAGER=ibm-cert-manager
$ export PROJECT_LICENSE_SERVICE=ibm-licensing
$ export PROJECT_SCHEDULING_SERVICE=cpd-scheduler
$ export PROJECT_CPD_INST_OPERATORS=cpd-operators
$ export PROJECT_CPD_INST_OPERANDS=cpd
$ export STG_CLASS_BLOCK=gp3-csi
$ export STG_CLASS_FILE=efs-nfs-client
$ export IBM_ENTITLEMENT_KEY=<YOUR_IBM_API_KEY>
$ export VERSION=5.0.0
$ export COMPONENTS=ibm-cert-manager,ibm-licensing,scheduler,cpfs,cpd_platform,ws,wml
$ export EFS_LOCATION=${EFS_ID}.efs.${AWS_REGION}.amazonaws.com
$ export EFS_PATH=/
$ export PROJECT_NFS_PROVISIONER=nfs-provisioner
$ export EFS_STORAGE_CLASS=efs-nfs-client
$ export NFS_IMAGE=k8s.gcr.io/sig-storage/nfs-subdir-external-provisioner:v4.0.2
- Download and install the version of the CP4D CLI (cpd-cli) that corresponds to your Cloud Pak for Data version:
$ curl -v https://icr.io
$ mkdir -p ibm-cp4d && wget https://github.com/IBM/cpd-cli/releases/download/v14.0.0/cpd-cli-linux-EE-14.0.0.tgz -O - | tar -xz -C ~/environment/ibm-cp4d --strip-components=1
$ export PATH=/home/ec2-user/environment/ibm-cp4d:$PATH
$ cpd-cli manage restart-container
- Log in to your ROSA cluster:
cpd-cli manage login-to-ocp --username=${OCP_USERNAME} \
  --password=${OCP_PASSWORD} --server=${OCP_URL}
- Set up persistent storage for your cluster:
cpd-cli manage setup-nfs-provisioner \
  --nfs_server=${EFS_LOCATION} --nfs_path=${EFS_PATH} \
  --nfs_provisioner_ns=${PROJECT_NFS_PROVISIONER} \
  --nfs_storageclass_name=${EFS_STORAGE_CLASS} \
  --nfs_provisioner_image=${NFS_IMAGE}
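Optionally, confirm that the provisioner pod is running and that the new storage class exists before creating the CP4D projects; this verification is a sketch, not part of the cpd-cli output:

# Optional: verify the NFS provisioner pod and the storage class it created.
oc get pods -n ${PROJECT_NFS_PROVISIONER}
oc get storageclass ${EFS_STORAGE_CLASS}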
- Create projects to deploy the CP4D software:
$ oc new-project ${PROJECT_CPD_INST_OPERATORS}
$ oc new-project ${PROJECT_CPD_INST_OPERANDS}
- Modify load balancer timeout settings to prevent connections from being closed before processes complete:
LOAD_BALANCER=$(aws elb describe-load-balancers --output text | grep $VPC_ID | awk '{ print $5 }' | cut -d- -f1 | xargs)
for lbs in ${LOAD_BALANCER[@]}; do
  aws elb modify-load-balancer-attributes \
    --load-balancer-name $lbs \
    --load-balancer-attributes "{\"ConnectionSettings\":{\"IdleTimeout\":600}}"
done
- Configure the global image pull-secret to pull images from the IBM container repository:
$ cpd-cli manage add-icr-cred-to-global-pull-secret \
  --entitled_registry_key=${IBM_ENTITLEMENT_KEY}
- Install certificate manager and the license service:
$ cpd-cli manage apply-cluster-components \
  --release=${VERSION} \
  --license_acceptance=true \
  --cert_manager_ns=${PROJECT_CERT_MANAGER} \
  --licensing_ns=${PROJECT_LICENSE_SERVICE}
$ cpd-cli manage apply-scheduler \
  --release=${VERSION} \
  --license_acceptance=true \
  --scheduler_ns=${PROJECT_SCHEDULING_SERVICE}
- Apply the required permissions by running authorize-instance-topology:
$ cpd-cli manage authorize-instance-topology \
  --cpd_operator_ns=${PROJECT_CPD_INST_OPERATORS} \
  --cpd_instance_ns=${PROJECT_CPD_INST_OPERANDS}
- Install the CP4D foundational services:
$ cpd-cli manage setup-instance-topology \
  --release=${VERSION} \
  --cpd_operator_ns=${PROJECT_CPD_INST_OPERATORS} \
  --cpd_instance_ns=${PROJECT_CPD_INST_OPERANDS} \
  --license_acceptance=true \
  --block_storage_class=${STG_CLASS_BLOCK}
- Create the operators and operator subscriptions for your CP4D installation:
$ cpd-cli manage apply-olm \
  --release=${VERSION} \
  --components=${COMPONENTS} \
  --cpd_operator_ns=${PROJECT_CPD_INST_OPERATORS}
- Install the CP4D platform and services:
$ cpd-cli manage apply-cr \
  --components=${COMPONENTS} \
  --release=${VERSION} \
  --cpd_instance_ns=${PROJECT_CPD_INST_OPERANDS} \
  --block_storage_class=${STG_CLASS_BLOCK} \
  --file_storage_class=${STG_CLASS_FILE} \
  --license_acceptance=true
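The apply-cr step can take a while. If you want to watch progress outside of the cpd-cli output, a simple check with oc works; this is an optional sketch, not part of the documented procedure:

# Optional: watch the CP4D pods come up in the instance namespace.
oc get pods -n ${PROJECT_CPD_INST_OPERANDS} --watch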
- Get your CP4D URL and admin credentials:
$ cpd-cli manage get-cpd-instance-details \
  --cpd_instance_ns=${PROJECT_CPD_INST_OPERANDS} \
  --get_admin_initial_credentials=true
- The command output displays the URL of your CP4D console and the password for your admin user (Figure 5).
- Using the information from the previous steps (CP4D URL, User, Admin Password), access your CP4D console.
- From the CP4D home (welcome page), click on Discover Services to be directed to the Services catalog.
- From the Services catalog, you can see all CP4D available services.
- Use the search bar to filter for Watson, and find the IBM Watson Machine Learning and IBM Watson Studio services. Note how they are displayed as Enabled (Figure 6).
Congratulations! You have successfully deployed IBM CP4D on Red Hat OpenShift on AWS.
Post-installation
Review the following topics when installing CP4D in production:
- Review the IBM system requirements documentation to calculate the size of your ROSA cluster.
- Review the administrative tasks to enable security, maintenance, monitoring, managing users, and backing up your environment.
- Set up services after you have installed the platform.
- Configure identity providers on ROSA.
- Enable auto scaling for your ROSA cluster (a hedged CLI sketch for these last two items follows this list).
- Configure logging and enable monitoring for your ROSA cluster.
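For the identity provider and autoscaling items above, the rosa CLI provides dedicated subcommands. The sketch below is illustrative only: the machine pool ID placeholder and the replica counts are assumptions, and the interactive identity provider flow is just one of several options described in the linked documentation.

# List existing machine pools, then enable autoscaling on one of them.
rosa list machinepools --cluster=$CLUSTER_NAME
rosa edit machinepool --cluster=$CLUSTER_NAME \
  --enable-autoscaling --min-replicas=3 --max-replicas=6 <MACHINE_POOL_ID>
# Add an identity provider interactively (GitHub, LDAP, OpenID, and so on).
rosa create idp --cluster=$CLUSTER_NAME --interactive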
Cleanup
Connect to your AWS Cloud9 workspace and run the following steps to delete the CP4D installation, including the ROSA cluster, to avoid incurring future charges on your AWS account:
EFS_ID=$(aws efs describe-file-systems \
--query 'FileSystems[?Name==`ibm_cp4d_fs`].FileSystemId' \
--output text)
MOUNT_TARGETS=$(aws efs describe-mount-targets --file-system-id $EFS_ID --query 'MountTargets[*].MountTargetId' --output text)
for mt in ${MOUNT_TARGETS[@]}; do
aws efs delete-mount-target --mount-target-id $mt
done
# Mount targets take a short time to delete before the file system can be removed.
aws efs delete-file-system --file-system-id $EFS_ID
rosa delete cluster -c $CLUSTER_NAME --yes --region $AWS_REGION
To monitor your cluster uninstallation logs, run:
rosa logs uninstall -c $CLUSTER_NAME --watch
Once the cluster is uninstalled, remove the operator-roles and oidc-provider, as indicated in the output of the rosa delete command. For example:
rosa delete operator-roles -c <OPERATOR_ROLES_NAME> -m auto -y
rosa delete oidc-provider -c <OIDC_PROVIDER_NAME> -m auto -y
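If you also want to remove the helper resources created for this installation (the IAM policy, role, instance profile, and the AWS Cloud9 workspace), a hedged sketch follows. The account ID and environment ID placeholders are assumptions, and the Cloud9 deletion should be run from a shell outside the workspace being deleted:

# Detach and delete the installer role, instance profile, and policy.
aws iam remove-role-from-instance-profile --instance-profile-name cp4d-installer \
  --role-name cp4d-installer
aws iam delete-instance-profile --instance-profile-name cp4d-installer
aws iam detach-role-policy --role-name cp4d-installer \
  --policy-arn arn:aws:iam::<YOUR_ACCOUNT_ID>:policy/cp4d-installer-permissions
aws iam delete-role --role-name cp4d-installer
aws iam delete-policy \
  --policy-arn arn:aws:iam::<YOUR_ACCOUNT_ID>:policy/cp4d-installer-permissions
# Delete the AWS Cloud9 environment (replace the ID with your environment's ID).
aws cloud9 delete-environment --environment-id <YOUR_CLOUD9_ENVIRONMENT_ID>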
Conclusion
In summary, we explored how customers can take advantage of a fully managed OpenShift service on AWS to run IBM CP4D. With this implementation, customers can spend more time on what is important to them, their workloads and their customers, and less time on the day-to-day operations of managing OpenShift to run CP4D.
If you are interested in learning more about CP4D on AWS, explore the IBM Cloud Pak for Data (CP4D) on AWS Modernization Workshop.
Visit the AWS Marketplace for IBM Cloud Pak for Data offers.
Further reading
- Building a healthcare data pipeline on AWS with IBM Cloud Pak for Data
- IBM Cloud Pak for Data Simplifies and Automates How You Turn Data into Insights
- Accelerate Data Modernization and AI with IBM Databases on AWS
- Build a Modern Data Architecture on AWS with your IBM Z Mainframe
- Making Data-Driven Decisions with IBM watsonx.data, an Open Data Lakehouse on AWS