AWS Partner Network (APN) Blog

How to Modernize a Replatformed Mainframe Development Lifecycle with AWS and NTT DATA

By Satya Samudrala, Software Dev. Sr. Specialist Advisor – NTT DATA Services


Mainframe applications are at the core of many organizations and are still widely used across industries, such as banking and financial services, that process large volumes of transaction data.

To increase business agility and keep pace with the rapid changes in the industry, the development lifecycle for mainframe applications should be accelerated.

NTT DATA and Amazon Web Services (AWS) help organizations replatform their legacy mainframe applications to the cloud and implement a DevOps solution for the replatformed applications to take advantage of the agility, flexibility, and elasticity of the AWS Cloud.

This post discusses how organizations that replatform their legacy mainframe applications to AWS using the NTT DATA UniKix product suite can implement a modern DevOps workflow with NTT DATA tools and select AWS services.

The CI/CD pipeline explained in this post will help organizations speed up the development, testing, and release processes and, in turn, improve their time to market.

NTT DATA is an AWS Premier Consulting Partner with the Mainframe Migration Competency that helps clients navigate and simplify the modern complexities of business and technology.

Mainframe Development Challenges

The following challenges can be addressed by replatforming mainframe applications to Linux and implementing DevOps with the NTT DATA UniKix product suite and select AWS services.

Limited Modern Development Features

Mainframe development tools typically lack features provided by modern integrated development environments (IDEs), such as syntax highlighting, auto code completion, smart editing, instant compilation, debugging, and testing.

These outdated development tools limit developer productivity. Additionally, mainframe teams may not have access to some of the tools used by other development teams across the organization, since those tools often lack z/OS support.

Complex and Inefficient Software Release Operations

Mainframe build and release processes are very complex, manual, and at times inefficient, often taking hours or days to complete. These processes cannot keep up with the increasing pace and frequency of business changes.

Even small changes in business requirements can delay timelines and result in significant rework, leading to longer response times to business changes and, ultimately, delayed time-to-market.

Development Tools and Processes Misaligned with Other Teams

The different processes of a mainframe application development lifecycle are typically not integrated, and the various teams involved often work in silos.

As enterprises build new software solutions to integrate front- and back-office applications, the rapid development and deployment of web and mobile applications don’t always align with the slower-moving development of backend mainframe applications. This can lead to miscommunication and delays, resulting in lost time and business.

Retaining and Attracting Talent

It’s difficult to find people with the right skill set to support mainframe applications. To attract a new generation of developers, mainframe development environments should shift toward modern tools that are more familiar to today’s developers.

Achieving Enterprise Agility with DevOps

NTT DATA recommends an incremental and iterative approach to enable DevOps for replatformed mainframe applications. This can begin with a small goal, such as introducing automation to a simple and repetitive process, and then applying the lessons learned toward increasingly larger goals.

An incremental and iterative approach minimizes the risk and costs of DevOps adoption and preserves the reliability of existing processes while building the necessary in-house skill sets.

The recommended three-step approach is:

  1. Roadmap: Analyze the existing processes and pipeline, identify those processes that require improvement, and develop a roadmap to address the pain areas.
  2. Pilot: Identify early adopters and implement the minimum DevOps approach to measure benefits.
  3. Iterate: Using the experience and lessons learned from the initial DevOps implementation, continue improving the remaining processes. With these small, incremental changes, you can measure the benefit of each change and gradually shift the culture toward effective and seamless collaboration.

Legacy development teams can become more agile by introducing modern IDEs for application development, replacing fixed environments with cloud-based on-demand environments, and using a CI/CD model for quicker time-to-market.

CI/CD Automation for Enterprises

CI/CD enables application development teams to deliver code changes more frequently and reliably. It relies on continuous automation and monitoring throughout the lifecycle of applications, from code building and testing stages to delivery and deployment.

With continuous integration, developers commit code into a shared repository, preferably several times a day. Developers may run unit tests on their code locally before integrating. Each integration can then be verified by an automated build and test run to immediately report if any errors exist.

The key objectives of continuous integration are to find and address bugs quicker, improve quality, and release changes frequently.

With continuous delivery, the code changes are built, tested, and automatically released to a staging or production environment. The key objectives of continuous delivery are to automate the release process, eliminate problems usually caused by manual deployment processes, and deliver changes faster, thereby improving time-to-market.

There are numerous tools available for the different stages of a CI/CD pipeline. The same pipeline may not be the right fit for all applications, so it’s good practice to design pipelines to be as modular as possible and to reuse tools from existing pipelines. This minimizes risk and reduces cost.

The DevOps tools used to develop and test replatformed mainframe applications should support compiling, building, and testing COBOL as well as other mainframe technologies. They should also support mainframe communication protocols and interfaces, such as 3270 terminals.

CI/CD with NTT DATA and AWS 

The figure below shows the different NTT DATA tools and AWS services that provide automation in and across each stage of a CI/CD pipeline for replatformed mainframe applications.


Figure 1 – CI/CD stages and services.

  • NTT DATA Enterprise Application IDE for Visual Studio and Eclipse – The IDE makes it easy for developers to write and modify COBOL code, with features such as syntax highlighting, code completion, compiling, executing, and debugging of COBOL programs. It also supports translating CICS commands and JCL.
  • NTT DATA Build – This tool compiles COBOL programs, compiles BMS and MFS maps, and translates JCLs and PROCs.
  • NTT DATA UTM – The Universal Test Manager (UTM) is a testing tool to record, create, store, and execute test scenarios for both online and batch applications.
  • NTT DATA UCMI – The UniKix Cloud Monitor Integration (UCMI) is a software product in the UniKix suite that’s used to pull key metrics from UniKix environments (TPE, BPE) and push these to a cloud provider. It also integrates with the UniKix Cloud Controller (UCC) to pull metrics from UCC managed instances, aggregate the data, and push this to a cloud provider.
  • NTT DATA UCVO – The UniKix Universal Virtual Operator (UCVO) is an intelligent log monitoring agent that can perform user-specified “actions” for specific errors or warnings. UCVO works in conjunction with Amazon CloudWatch and CloudWatch Logs Agent to capture errors and warnings from TPE/BPE and to optionally perform mitigation functions. Some of the errors reported by TPE/BPE can be auto-corrected or fixed by implementing actions in UCVO. Development or operations teams can respond to issues faster by receiving auto alerts from UCVO.
  • AWS Services – AWS CodePipeline, AWS CodeCommit, AWS CodeBuild, AWS CodeDeploy, Amazon CloudWatch, AWS Lambda, and Amazon S3.
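
As a rough illustration of how these services fit together, the sketch below uses boto3 (Python) to define a three-stage pipeline: a CodeCommit source, a CodeBuild project that would wrap the NTT DATA Build tool, and a CodeDeploy action for the Dev environment. All resource names, ARNs, and the account ID are placeholders, not part of the NTT DATA solution.

    import boto3

    codepipeline = boto3.client("codepipeline")

    # Placeholder names and ARNs; substitute your own resources.
    pipeline = {
        "name": "replatformed-cobol-pipeline",
        "roleArn": "arn:aws:iam::123456789012:role/CodePipelineServiceRole",
        "artifactStore": {"type": "S3", "location": "my-pipeline-artifacts"},
        "stages": [
            {"name": "Source", "actions": [{
                "name": "CodeCommitSource",
                "actionTypeId": {"category": "Source", "owner": "AWS",
                                 "provider": "CodeCommit", "version": "1"},
                # Rely on the CloudWatch Events rule instead of polling.
                "configuration": {"RepositoryName": "cobol-app",
                                  "BranchName": "main",
                                  "PollForSourceChanges": "false"},
                "outputArtifacts": [{"name": "SourceOutput"}]}]},
            {"name": "Build", "actions": [{
                "name": "NTTDataBuild",
                "actionTypeId": {"category": "Build", "owner": "AWS",
                                 "provider": "CodeBuild", "version": "1"},
                # The CodeBuild project would invoke the NTT DATA Build tool.
                "configuration": {"ProjectName": "ntt-data-build"},
                "inputArtifacts": [{"name": "SourceOutput"}],
                "outputArtifacts": [{"name": "BuildOutput"}]}]},
            {"name": "DeployDev", "actions": [{
                "name": "DeployToDev",
                "actionTypeId": {"category": "Deploy", "owner": "AWS",
                                 "provider": "CodeDeploy", "version": "1"},
                "configuration": {"ApplicationName": "cobol-app",
                                  "DeploymentGroupName": "dev"},
                "inputArtifacts": [{"name": "BuildOutput"}]}]},
        ],
    }

    codepipeline.create_pipeline(pipeline=pipeline)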

CI/CD Pipeline Workflow 

The figure below illustrates the end-to-end workflow of how a code change moves through the automated stages, from development to the production environment, using NTT DATA tools and AWS services.


Figure 2 – CI/CD workflow.

The workflow involves the following:

  1. The developers of an application make changes to the source code using the NTT DATA IDE.
  2. Developers push the changes to an AWS CodeCommit repository.
  3. A CloudWatch Events rule is configured to trigger AWS CodePipeline whenever a change is pushed to the CodeCommit repository (a minimal sketch of such a rule follows this list).
  4. When a change is pushed, the CodeCommit repository triggers CodePipeline through the CloudWatch Events rule.
  5. The pipeline defined in CodePipeline downloads the code from the CodeCommit repository, and then initiates the build by invoking the NTT DATA Build tool.
  6. NTT DATA Build tool runs various steps, including but not limited to expanding copybooks, pre-compilation, compilation, and bind based on the source type. It then builds the executables.
  7. The executables are stored in Amazon S3.
  8. If the build steps (5-7) are successful, the pipeline triggers AWS CodeDeploy.
  9. AWS CodeDeploy gets the executables from S3, and then deploys them to the Dev environment.
  10. If deploy steps (8-9) are successful, the artifacts deployed can be tested using the NTT DATA UTM tool. UTM helps automate the execution of test cases.
  11. If the tests are successful, the next stage of the pipeline can be triggered manually to deploy the code to a subsequent environment.
  12. Steps 8-11 are repeated for each environment until the code is deployed to Production.
  13. UCMI collects the following metrics from the Production environment:
    • TPE metrics:
      • Region status
      • Transaction rate at the system level
      • Transactions aborted at the system level
      • Transactions waiting at the system level
      • Transaction rate at the transaction class level
      • Transactions aborted at the transaction class level
      • Transactions waiting at the transaction class level
      • Connected users
    • BPE Metrics:
      • Node status
      • Job classes active threads
      • Job classes available threads
      • Job classes pending threads
      • System status queued jobs
      • System status active jobs
      • Number of jobs terminated
      • Number of jobs completed
      • Number of jobs aborted
      • Number of jobs canceled
  14. UCMI pushes the metrics collected above to CloudWatch, where they can be viewed in various graph formats using the UCMI dashboard imported into CloudWatch (a custom-metric sketch follows this list).
  15. The UCVO controller operates as a Lambda function and receives events from CloudWatch. These events include error messages reported by the TPE/BPE software in UniKix. With UCVO, users select which errors to act on based on the error code. Like the UCVO controller, actions are also AWS Lambda functions (a minimal controller sketch follows this list).
    Following are the actions included in UCVO:

    • TPE action commands: kixdump, kixregion, kixsnap, kixtran
    • BPE action commands: BPESUB, ebmsnap, histprt, lstjcl, lststs, mbmlockstat, prsadm
    • Unix action commands: df, ip, netstat, ps
    • Notification action commands: SendNotification via email or text message
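
To make steps 3 and 4 concrete, the following is a minimal sketch, in Python with boto3, of the CloudWatch Events rule that starts the pipeline on a push to the repository. The repository and pipeline ARNs and the IAM role (which must allow CloudWatch Events to start pipeline executions) are placeholders.

    import json
    import boto3

    events = boto3.client("events")

    # Placeholder ARNs; substitute your own resources.
    REPO_ARN = "arn:aws:codecommit:us-east-1:123456789012:cobol-app"
    PIPELINE_ARN = "arn:aws:codepipeline:us-east-1:123456789012:replatformed-cobol-pipeline"
    ROLE_ARN = "arn:aws:iam::123456789012:role/EventsInvokePipelineRole"

    # Fire whenever a commit lands on the main branch of the repository.
    events.put_rule(
        Name="cobol-app-commit-rule",
        EventPattern=json.dumps({
            "source": ["aws.codecommit"],
            "detail-type": ["CodeCommit Repository State Change"],
            "resources": [REPO_ARN],
            "detail": {
                "event": ["referenceCreated", "referenceUpdated"],
                "referenceType": ["branch"],
                "referenceName": ["main"],
            },
        }),
    )

    # Start the pipeline when the rule matches.
    events.put_targets(
        Rule="cobol-app-commit-rule",
        Targets=[{"Id": "codepipeline", "Arn": PIPELINE_ARN, "RoleArn": ROLE_ARN}],
    )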
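
For step 9, the sketch below shows the underlying CodeDeploy call: creating a deployment whose revision is the executables bundle stored in S3. The application, deployment group, bucket, and key names are placeholders; within the pipeline itself, CodePipeline makes an equivalent call on your behalf.

    import boto3

    codedeploy = boto3.client("codedeploy")

    # Placeholder application, deployment group, and artifact location.
    codedeploy.create_deployment(
        applicationName="cobol-app",
        deploymentGroupName="dev",
        revision={
            "revisionType": "S3",
            "s3Location": {
                "bucket": "my-pipeline-artifacts",
                "key": "BuildOutput/executables.zip",
                "bundleType": "zip",
            },
        },
        description="Deploy replatformed COBOL executables to Dev",
    )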
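
For step 14, UCMI’s actual namespace and metric names are product-defined; the sketch below only illustrates the CloudWatch custom-metric mechanism it relies on, using a hypothetical TPE transaction-rate metric.

    import boto3

    cloudwatch = boto3.client("cloudwatch")

    # Hypothetical example: publish a TPE transaction-rate data point.
    cloudwatch.put_metric_data(
        Namespace="UniKix/TPE",
        MetricData=[{
            "MetricName": "TransactionRate",
            "Dimensions": [{"Name": "Region", "Value": "KIXR"}],
            "Value": 125.0,
            "Unit": "Count/Second",
        }],
    )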
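
For step 15, UCVO’s internals are proprietary, but the general pattern of a controller Lambda function that maps an error code to an action Lambda function can be sketched as follows. The event shape, error codes, and action function names here are hypothetical.

    import json
    import boto3

    lambda_client = boto3.client("lambda")

    # Hypothetical mapping of TPE/BPE error codes to action functions.
    ACTIONS = {
        "KIX1234E": "ucvo-action-kixsnap",
        "EBM5678E": "ucvo-action-ebmsnap",
    }

    def handler(event, context):
        # Extract the error code from the incoming CloudWatch event.
        error_code = event.get("detail", {}).get("errorCode", "")
        action_fn = ACTIONS.get(error_code)
        if action_fn is None:
            return {"status": "ignored", "errorCode": error_code}
        # Invoke the matching action asynchronously, forwarding the event.
        lambda_client.invoke(
            FunctionName=action_fn,
            InvocationType="Event",
            Payload=json.dumps(event),
        )
        return {"status": "dispatched", "action": action_fn}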

Summary

The development and release lifecycle of mainframe applications replatformed using the NTT DATA UniKix product suite on AWS can be accelerated by using a modern IDE, adding support for local testing, and automating build, deploy, test, and release by leveraging a CI/CD pipeline.

NTT DATA provides proven enterprise solutions for all stages of replatformed mainframe pipelines, and AWS provides managed services designed to enable enterprises to build and deliver more rapidly and reliably in a cost-optimized manner. The combined solution accelerates business agility and improves the quality as well as the efficiency of replatformed mainframe applications.

Contact NTT DATA to learn more about the UniKix Mainframe Replatforming product suite and how NTT DATA can help modernize legacy mainframe assets.

The content and opinions in this blog are those of the third-party author and AWS is not responsible for the content or accuracy of this post.



NTT DATA Services – AWS Partner Spotlight

NTT DATA is an AWS Premier Consulting Partner with the Mainframe Migration Competency that helps clients navigate and simplify the modern complexities of business and technology.

Contact NTT DATA | Partner Overview | AWS Marketplace

*Already worked with NTT DATA? Rate the Partner

*To review an AWS Partner, you must be a customer that has worked with them directly on a project.