AWS for SAP

SAP Content Server High Availability using Amazon EFS and SUSE

Introduction

Earlier this year, SAP announced the extension of mainstream maintenance for SAP Business Suite 7 core applications until the end of 2027, followed by optional extended maintenance until the end of 2030. You can read the details here. With this announcement, customers migrating their SAP systems from end-of-life, traditional data centers to AWS have many options to choose from. Instead of upgrading or migrating to SAP S/4HANA, they can continue to use their existing SAP Business Suite 7 systems on AWS as-is. With this option, they want to ensure that their core SAP applications and associated systems take advantage of the redundancy and availability provided by the AWS infrastructure.

SAP Content Server is one such application, tightly integrated with SAP Business Suite 7. SAP Content Server is a standalone component in which a large quantity of electronic documents of any format and with any content can be stored. The documents can be stored either in one or more SAP MaxDB instances or in the file system. Sales invoices, purchase orders, salary slips, emails, and architecture diagrams are common examples of electronic documents stored in SAP Content Server. The data held by SAP Content Server is critical, so customers demand a resilient and highly available setup for this component.

In this blog post, we provide the steps to configure a highly available SAP Content Server 7.53 with SAP MaxDB 7.9 on AWS using Amazon Elastic File System (Amazon EFS). Although we use the SUSE High Availability Extension tool set in this blog, a similar setup can also be achieved for Red Hat Enterprise Linux based systems. Likewise, the SAP MaxDB database can be replaced with a file system as the SAP Content Server repository.

To match the SLA of SAP Content Server with that of the OLTP systems on AWS, we have to understand the architecture of SAP Content Server. This includes identifying its single points of failure and building a resilient system spread across Availability Zones. SAP Content Server consists of an application layer and a storage layer. In this blog, we use the SAP MaxDB database as the storage layer. The application layer is managed by SAP's sapcontrol service. With the SUSE High Availability Extension (HAE), we can manage both application and storage layer resources and build an automated, highly available solution.

Scope

This document focuses specifically on setting up the cluster resource management layer for SAP Content Server version 7.53 (and above) running on an SAP MaxDB 7.9 database on AWS, using an Amazon EFS file system and the overlay IP construct. Please refer to sections 6.1.1 to 6.1.3 of this blog for setting up the cluster communication layer of the SUSE High Availability cluster. In the current blog, we assume that these steps have already been performed.

Prerequisites

A high availability setup for SAP solutions is a technically complex activity and has a number of prerequisites that must be met before the actual cluster configuration. This blog post assumes that you are familiar with the general concepts of setting up the communication layer of the SUSE HA cluster solution. We recommend you refer to this blog to understand the concepts and requirements in detail. For the purposes of the current blog, the following prerequisites need to be met before starting the SAP system installation.

  • Admin Access: to an AWS account with rights to (an illustrative AWS CLI sketch for some of these steps follows this list)

    • Create/modify security groups to allow the ports required by the cluster solution, network file system (NFS), etc.
    • Create/mount Amazon EFS file system(s) to run SAP Content Server and the SAP MaxDB database
    • Create and launch Amazon Machine Images (AMIs) to create the secondary node
    • Modify the route table to add the overlay IP
    • Create IAM policies and roles so that the EC2 instance(s) can manage AWS resources on behalf of the cluster
    • Create a new tag for the EC2 instances with a cluster-specific tag name and the instance hostname as the value
    • Disable the source/destination check for the cluster instances
    • Provision a Network Load Balancer to connect to the Content Server HTTP endpoint using the overlay IP
    • Optional access to modify Route 53 A records
  • Amazon EFS: provides a simple, scalable, fully managed, elastic NFS file system for use with AWS cloud services and on-premises resources. As per SAP Note 2772496, EFS can be used as the storage layer to host the SAP MaxDB database for SAP Content Server. The service is designed to be highly scalable, highly available, and highly durable. Amazon EFS file systems store data and metadata across multiple Availability Zones in an AWS Region. These attributes of EFS allow us to eliminate the complex data replication requirements for setting up a standby MaxDB database. Because the throughput of an EFS file system in bursting mode scales with the amount of data stored, we used a single EFS file system to host all shareable SAP Content Server instance and MaxDB storage file systems. The subdirectories can be created once by temporarily mounting the EFS root:
    # mount -t nfs4 efs-dns-name:/ /temp
    # mkdir -p /temp/sapdb /temp/sapmnt /temp/SYS /temp/sapdata /temp/saplog /temp/C00
    # umount /temp
  • Overlay IP: Overlay IP is a construct that allows network traffic in an Amazon Virtual Private Cloud (VPC) to be redirected to an EC2 instance irrespective of its subnet and Availability Zone placement. It uses a static IP address that is not part of the VPC CIDR range and routes traffic to it using the VPC route tables. The SUSE HA extension for AWS provides resources that can dynamically change the VPC route tables in a failover scenario. Add the overlay IP to the eth0 interface of both cluster nodes using the following command:

    # ip address add <OVERLAY-IP> dev eth0
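
The following is an illustrative AWS CLI sketch (not part of the original post) of some of these prerequisite steps; the instance ID, route table ID, and hostname values are placeholders that you would replace with your own, and the tag key "pacemaker" matches the STONITH configuration used later in this blog:

    # tag each cluster node with the STONITH tag key and the instance hostname as the value
    aws ec2 create-tags --resources <instance-id> --tags Key=pacemaker,Value=<hostname>

    # disable the source/destination check on each cluster instance
    aws ec2 modify-instance-attribute --instance-id <instance-id> --no-source-dest-check

    # add a route table entry for the overlay IP pointing to the current primary node
    aws ec2 create-route --route-table-id <rtb-id> --destination-cidr-block <OVERLAY-IP>/32 --instance-id <instance-id>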

Solution Architecture

The following diagram shows the high availability architecture being discussed in this blog.

Solution architecture diagram showing the reference architecture described in this blog

The application and database layers of SAP Content Server will be installed on the same EC2 instance. The secondary node of the cluster will be launched in a separate Availability Zone within the same AWS Region, using an AMI backup of the primary EC2 instance. These EC2 instances are launched from an AWS Marketplace AMI for SUSE Linux Enterprise Server (SLES) for SAP Applications, which comes pre-installed with the SUSE High Availability Extension. Please note that in this architecture, an EBS volume is used only for the root file system.

SAP Content Server Installation Steps

SAP Note 2786364 provides detailed steps to download, install, and update SAP Content Server and the SAP MaxDB database. SAP Content Server is installed as a single-node server. Prior to the installation on the primary node, make sure all required file systems are mounted properly. Since we are using EFS for these file systems, create the mount points using the following command:

# mkdir -p /usr/sap /usr/sap/CSX /usr/sap/CSX/SYS /usr/sap/CSX/C00 /sapmnt /sapdb /sapdb/data /sapdb/log

The file systems for the SAP profiles, SAP binaries, and MaxDB database binaries can be mounted using fstab entries such as the following:

efs-dns-name:/SYS /usr/sap/CSX/SYS nfs4 rw,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2 0 0
efs-dns-name:/sapmnt /sapmnt nfs4 rw,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2 0 0
efs-dns-name:/sapdb /sapdb nfs4 rw,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2 0 0

Please note that the previous arguments are just an example. For a comprehensive list of options, refer to the recommended NFS mount options. Mount the SAP Content Server instance and SAP MaxDB database storage file systems manually using:

# mount -t nfs4 -o rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2 efs-dns-name:/sapdata /sapdb/data
# mount -t nfs4 -o rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2 efs-dns-name:/saplog /sapdb/log
# mount -t nfs4 -o rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2 efs-dns-name:/C00 /usr/sap/CSX/C00

These mount points will be managed by the cluster and hence do not need to be hardcoded in fstab.
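
As a quick sanity check (not part of the original instructions), you can verify that the EFS targets are mounted with the expected NFS type before starting the installation, for example:

# list the file system type and usage of all EFS-backed mount points
# df -hT /usr/sap/CSX/SYS /sapmnt /sapdb /sapdb/data /sapdb/log /usr/sap/CSX/C00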

Once SAP Content Server has been installed on the primary node with its SAP MaxDB database, create an AMI backup and build the secondary node in the planned secondary Availability Zone. The fstab entries ensure that the basic file systems are mounted on the secondary node as well.
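
As an illustration only, the AMI backup and the secondary node launch could also be scripted with the AWS CLI along the following lines; the instance ID, AMI name, instance type, subnet, security group, and key pair values are placeholders and not from the original post:

# create an AMI of the installed primary node
aws ec2 create-image --instance-id <primary-instance-id> --name "contentsrv-primary-ami" --no-reboot

# launch the secondary node from that AMI into a subnet in the second Availability Zone
aws ec2 run-instances --image-id <ami-id> --instance-type <instance-type> \
    --subnet-id <subnet-in-second-az> --security-group-ids <sg-id> --key-name <key-pair-name>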

Configuration and Performance Considerations for Amazon EFS

When creating an Amazon EFS file system, customers can choose the performance mode and throughput mode based on their requirements. With the bursting throughput mode, throughput on Amazon EFS scales as the size of your file system in the standard storage class grows.

For example, at the time of writing this blog, a 1024 GiB EFS file system using the general purpose performance mode with bursting throughput mode provides 50 MiB/s of baseline throughput and 100 MiB/s of burst throughput, with a maximum burst duration of 720 min/day. You will also find useful tips on the Amazon EFS Performance page.
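
If you want to confirm the current metered size, performance mode, and throughput mode of your file system, one way (assuming the same <efs-id> placeholder used elsewhere in this blog) is via the AWS CLI:

# show metered size, performance mode, and throughput mode of the EFS file system
aws efs describe-file-systems --file-system-id <efs-id> \
    --query "FileSystems[0].{SizeBytes:SizeInBytes.Value,PerformanceMode:PerformanceMode,ThroughputMode:ThroughputMode}" \
    --output table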

HA Configuration Steps

Once the Corosync communication layer configuration for the cluster has been completed as mentioned in the Prerequisites section, you will have the Corosync configuration file created, and you should be able to see the cluster status from both participating nodes using the following command:

# crm status 

Corosync configuration status on node1 of the cluster

Corosync configuration status on node2 of the cluster

At this stage, you can use the following steps to configure SAP Content Server HA running on the MaxDB database:

  1. Set the cluster in maintenance mode:
    # crm configure property maintenance-mode="true"
  2. Define the cluster base properties in a file; we are using crm-bs.txt:
    property cib-bootstrap-options: \
        stonith-enabled="true" \
        stonith-action="off" \
        stonith-timeout="600s"
    rsc_defaults rsc-options: \
            resource-stickiness=1 \
            migration-threshold=3
    op_defaults op-options: \
            timeout=600 \
            record-pending=true
    
    # crm configure load update crm-bs.txt
    
  3. Define STONITH resource:
    •  Update the EC2 instance tag with a key of your choice (<anykey>) and the instance hostname (<hostname>) as the value. A tag key named “pacemaker” has been used in this blog.
    •  Define an AWS profile with a name of your choice (<anyname>) and the following configuration. A profile named “cluster” has been used in this blog.
      [profile cluster]
      region = <your region>
      output = text
    • AWS Specific STONITH resource definition in aws-stonith.txt:
      primitive res_AWS_STONITH stonith:external/ec2 \
      op start interval=0 timeout=180 \
      op stop interval=0 timeout=180 \
      op monitor interval=120 timeout=60 \
      params tag=pacemaker profile=cluster

      Load the STONITH resource definition into the cluster:
      # crm configure load update aws-stonith.txt
  4. Configure the Content Server cluster resources (filename aws_cs.txt used here):
    primitive rsc_fs_CSX_C00 Filesystem \
        params  device="<efs-id>.efs.<your-region>.amazonaws.com:/C00" directory="/usr/sap/CSX/C00" \
                fstype="nfs4" \
                options="rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2" \
        op start timeout=60s interval=0 \
        op stop timeout=60s interval=0 \
        op monitor interval=200s timeout=40s
    primitive rsc_sap_CSX_C00 SAPInstance \
        operations $id=rsc_sap_CSX_C00-operations \
        op monitor interval=120 timeout=60 on-fail=restart \
        params InstanceName=CSX_C00_sapcontentserv \
            START_PROFILE="/usr/sap/CSX/SYS/profile/CSX_C00_sapcontentserv" \
            AUTOMATIC_RECOVER=false \
            MONITOR_SERVICES="sapcs" \
        meta resource-stickiness=5000 failure-timeout=60 \
            migration-threshold=1 priority=10
    
    group grp_CSX_C00 \
       rsc_fs_CSX_C00 rsc_sap_CSX_C00 \
            meta resource-stickiness=3000

    Load the SAP Content Server cluster resource configuration to the cluster:
    # crm configure load update aws_cs.txt

  5. Prepare SAP MaxDB to be controlled by the SAP Host Agent:
    • Mount the DB file systems (refer to the SAP Content Server installation steps)
    • Add the overlay IP to the EC2 instance using
      # ip address add <OVERLAY-IP> dev eth0
    • Follow SAP Note 2018919 (SAP MaxDB/SAPHostagent: Setting connect information with the SetDatabaseProperty function) to configure the SAP Host Agent to access the SAP MaxDB database:
      #/usr/sap/hostctrl/exe/saphostctrl -host <virtualhostname> \
      -user sapadm <password> -dbname CSX -dbtype ada \
      -function SetDatabaseProperty DBCredentials=SET \
      -dboption User=SUPERDBA -dboption Password=<password>
    • Remove the overlay IP and unmount the DB file systems on the primary node, then repeat the previous steps on the secondary node.
  6. Configure the cluster resources for the SAP MaxDB database (filename aws_maxdb.txt used here):
    primitive rsc_ip_CSX_VIP ocf:suse:aws-vpc-move-ip \
        params  ip='192.168.10.10' routing_table=<rtb-xxxxxxxxxxxxxxx> \
                interface=eth0 profile=cluster \
        op start interval=0 timeout=180 \
        op stop interval=0 timeout=180 \
        op monitor interval=60 timeout=60
    primitive rsc_fs1_CSX_SDB Filesystem \
        params  device="<efs-id>.efs.<your-region>.amazonaws.com:/sapdata" directory="/sapdb/data" \
                fstype="nfs4" \
                options="rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2" \
        op start timeout=60s interval=0 \
        op stop timeout=60s interval=0 \
        op monitor interval=200s timeout=40s
    primitive rsc_fs2_CSX_SDB Filesystem \
        params  device="<efs-id>.efs.<your-region>.amazonaws.com:/saplog" directory="/sapdb/log" \
                fstype="nfs4" \
                options="rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2" \
        op start timeout=60s interval=0 \
        op stop timeout=60s interval=0 \
        op monitor interval=200s timeout=40s
    primitive rsc_sapdb_CSX_SDB ocf:heartbeat:SAPDatabase \
         params SID="CSX" DBTYPE="ADA" \
         op monitor interval="120s" timeout="60s" start_delay="180s" \
         op start interval="0" timeout="120s" \
         op stop interval="0" timeout="180s"
    
    
    group grp_CSX_SDB \
      rsc_ip_CSX_VIP rsc_fs1_CSX_SDB rsc_fs2_CSX_SDB rsc_sapdb_CSX_SDB \
            meta resource-stickiness=3000 \
             meta target-role="Started"

    Load the DB resource configuration to the cluster:
    # crm configure load update aws_maxdb.txt

  7. Configure colocation and ordering constraints to make sure that the Content Server and MaxDB resources always start on the same cluster node, and that the Content Server is always started after the MaxDB database (we have used filename crm_col.txt):
    colocation col_sap_CSX_both INFINITY: grp_CSX_SDB grp_CSX_C00
    order ord_sapdb_first_start Mandatory: rsc_sapdb_CSX_SDB:start rsc_fs_CSX_C00:start rsc_sap_CSX_C00:start sequential="true"

    Load the constraint configuration to the cluster:
    # crm configure load update crm_col.txt
  8. Activate the cluster and monitor the cluster status (a few additional verification commands are sketched after this list):
    # crm configure property maintenance-mode="false"
    # crm status

    Content Server cluster status with all SAP resources running on node contentsrv2
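
In addition to crm status, the following optional checks, sketched here using standard sapcontrol, saphostctrl, and crmsh calls, can confirm that the Content Server instance, the MaxDB database, and the individual cluster resources are healthy on the active node. Run the sapcontrol call as the <sid>adm user (typically csxadm for SID CSX):

# list the processes of the Content Server instance C00
sapcontrol -nr 00 -function GetProcessList

# query the MaxDB database status through the SAP Host Agent
/usr/sap/hostctrl/exe/saphostctrl -function GetDatabaseStatus -dbname CSX -dbtype ada

# show the state of the individual cluster resources
crm resource status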

Test SAP Content Server Failover

  1. Testing the failover
    • To move SAP Content Server to the other node manually using the sapcontrol utility, run the following as the <sid>adm user on the active node:
      # sapcontrol -nr 00 -function HAFailoverToNode ""
      Content Server cluster status with all SAP resources running on node contentsrv1 after failover
  2. Monitor the SAP Content Server status:
    • Using the Content Server HTTP URL (NLB or Route 53 based URL); a command line check is also sketched after this list:
      • Before failover
        Content Server status using HTTP endpoint for SAP Content Server
      • After failover
        Content Server status using HTTP endpoint for SAP Content Server after failover
    • Using the CSADMIN transaction code in the SAP system:
      • Before Failover
        Content Server status using SAP CSADMIN transaction code
      • After Failover
        Content Server status using SAP CSADMIN transaction code
    • Using the RSCMST program in the SAP system (SAP Note 2888195 – Content Server 7.53 and report RSCMST):
      • Before Failover
        Content Server diagnosis check using SAP report RSCMST
      • After Failover
        Content Server diagnosis check using SAP report RSCMST after failover
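
As an alternative to checking in the browser or from the SAP system, the HTTP endpoint can also be probed from the command line before and after the failover. The sketch below assumes the default Content Server HTTP port 1090 and the standard serverInfo command of the SAP HTTP content server interface; replace the hostname with your NLB or Route 53 based URL:

# query the Content Server status through the NLB / Route 53 endpoint
curl "http://<content-server-hostname>:1090/ContentServer/ContentServer.dll?serverInfo"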

Conclusion

In this blog, we have seen how to set up a high availability architecture for SAP Content Server 7.53 using the built-in capabilities of Amazon EFS together with the SUSE High Availability Extension. Incorporating Amazon EFS reduces the complexity of this solution: all files and directories in Amazon EFS are redundantly stored within and across Availability Zones.

Get Started

In this blog, we used two Availability Zones to develop the solution. Customers can go beyond that and set up a high availability solution across all Availability Zones within a given AWS Region. Also, instead of using the SAP MaxDB database, customers can simplify the solution by replacing the SAP Content Server storage layer with a file-system-based repository on Amazon EFS. Please contact us at sap-on-aws@amazon.com with any questions, or visit aws.com/sap to learn more.