AWS Cloud Operations & Migrations Blog

Augmenting mainframe data with IBM MQ and Amazon Managed Streaming for Apache Kafka

Introduction

In this post, we explore the approach of integrating mainframe IBM MQ with Amazon Managed Streaming for Apache Kafka (Amazon MSK) to migrate your applications to a cloud-based consumer model. Amazon MSK is a fully managed Apache Kafka service from AWS that makes it simpler to set up and operate Kafka in the cloud. This solution can be leveraged by customers who have IBM MQ on-premises and are interested in using a cloud-based Kafka service to build new applications or offload existing applications to consume data from the cloud.

IBM MQ customers are considering highly scalable, cloud-based, event-driven integration solutions to augment existing on-premises functionality with cloud services, taking advantage of the agility, flexibility, and reliability of the cloud. In recent years, Apache Kafka has been growing in popularity. Unlike traditional MQ systems or Web Service / API based architectures, Apache Kafka makes it easier to decouple applications from one another. While many companies are embracing Apache Kafka as their core event streaming platform, they still have events they want to unlock in on-premises systems. Kafka Connect provides a way to integrate with these on-premises systems, be it mainframe, IBM MQ, relational databases, or others.

This integration allows data to be fetched from an IBM MQ queue and pushed to a Kafka topic on AWS for further processing. Optionally, there might also be an interim requirement to write data back to on-premises IBM MQ for legacy applications to consume. In this post, we provide an overview of integrating your MQ subsystem with Amazon MSK and walk through, step by step, how to configure the integration between IBM MQ and Amazon MSK and validate the bi-directional message flow. The same pattern can be used with other message-oriented middleware systems, such as Apache ActiveMQ, TIBCO EMS, and other JMS-compliant brokers.

Solution overview

In the current state, you are running producer and consumer applications connected to an on-premises IBM MQ middleware (Figure 1).
Figure 1. IBM MQ Applications running on mainframe

To offload some of the consumer applications onto AWS, you can integrate your existing IBM MQ running on-premises with Amazon MSK using Kafka Connect. Kafka connectors seamlessly exchange data between IBM MQ queues and Amazon MSK topics (Figure 2).

Figure 2. Integrate IBM MQ with Amazon MSK

Once you have the data in AWS, you can begin deploying consumer applications that consume from Kafka. These consumers can be new applications or additional instances of applications running on-premises. At this stage, message producers are still on-premises, sending messages to the IBM MQ queues. You can iteratively move more consumers as you validate the availability and scalability of the solution. The goal of this phase is to reach a state where messages are still produced on-premises while some consumers run on AWS. At this point, you can also build net new capabilities on AWS.

In a later phase, you have the option to move both your producer and consumer applications to Kafka and retire your on-premises IBM MQ.

How it Works

Kafka Connect is a tool for streaming data between Apache Kafka and other systems. To integrate IBM MQ with Amazon MSK, you will use the Kafka Connector for IBM MQ. The connector is supplied as source code which you can easily build into a JAR file.

The Kafka Connector for IBM MQ integrates IBM MQ and Amazon MSK by continuously copying streaming data from a message source into your MSK cluster, or by continuously copying data from your MSK cluster into a message sink. A connector can also perform lightweight logic such as transformation, format conversion, or filtering before delivering the data to a destination. The Kafka Connect IBM MQ source connector pulls messages from an IBM MQ queue and pushes them to a Kafka topic. If you also have a requirement to send data back to your on-premises systems, the IBM MQ sink connector can be used to pull messages from a Kafka topic and push them to an on-premises MQ queue.

MSK Connect is a feature of Amazon MSK that allows you to configure and deploy a connector using Kafka Connect with just a few clicks. MSK Connect provisions the required resources and sets up the cluster. It continuously monitors the health and delivery state of connectors, patches and manages the underlying hardware, and auto-scales connectors to match changes in throughput. As a result, you can focus your resources on building applications instead of managing infrastructure. MSK Connect is fully compatible with Kafka Connect (Figure 3).

Figure 3. Integrate IBM MQ with Amazon MSK using MQ source and sink connectors

With IBM MQ running on-premises, the Kafka connector must communicate to the AWS Cloud by establishing network connectivity using either an AWS VPN tunnel or AWS Direct Connect.

Now that we understand how MSK Connect connectors can be used to decouple producers and consumers, the hands-on lab section walks through the steps to configure the integration between IBM MQ and Amazon MSK and validate the bidirectional message flow. We will also discuss how to consume Amazon MSK stream data.

How to consume Amazon MSK stream data

Stream processing: Data is read in the order in which it is produced. Figure 4 shows the AWS services that can be used to consume stream data from Amazon MSK.

Stream destination: Once the consumers have processed the stream data, Figure 4 also shows some of the destinations to which it can be delivered.


Figure 4. How to consume data using AWS services

Hands-on lab

To create the connector using MSK Connect, you need a custom plugin for the middleware integration subsystem, IBM MQ in this case. A plugin is an AWS resource that contains the code that defines your connector logic, in the form of a JAR file. You upload a JAR file (or a ZIP file that contains one or more JAR files) to an Amazon S3 bucket and specify the location of the bucket when you create the plugin. When you create a connector, you specify the plugin that you want MSK Connect to use for it.

The demo environment used for this lab has the following components as depicted in the diagram (Figure 5):

  • VPC A has
    • IBM MQ installed and running on IBM Z Development and Test Environment (ZD&T) on an EC2 instance. ZD&T runs a z/OS distribution on a personal computer or workstation Linux environment. It creates an environment for mainframe application demonstration, development and testing without Z mainframe hardware.
    • Queue manager configured with queues and channels
  • VPC B has
    • Amazon MSK cluster (the Amazon managed Apache Kafka service) and connectors. You can use the implementation guide and the accompanying AWS CloudFormation templates to provision the MSK cluster and a Kafka client on Amazon EC2.
    • Amazon EC2 Linux instance with Kafka client and Kafdrop tools.
    • Amazon EC2 Windows instance with IBM MQ Explorer (SupportPac MS0T).
  • VPC peering is established between VPC A and VPC B

Figure 5. Environment setup for proof of concept

Perform the following steps to configure and validate the integration and message flow.

Step 1: Build the custom plugins for IBM MQ

A custom plugin is a set of JAR files that contain the implementation of one or more connectors, transforms, or converters. Amazon MSK will install the plugin on the workers of the connect cluster where the connector is running. We downloaded the IBM MQ connector plugin from GitHub.

Use the README file for the source and sink connectors to build the JAR files. You only need to follow the steps in the section: Building the connector.
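The exact commands depend on the connector version, but a minimal sketch of the build, assuming the ibm-messaging GitHub repositories and a local Maven and JDK installation, looks like the following:

git clone https://github.com/ibm-messaging/kafka-connect-mq-source.git
cd kafka-connect-mq-source
mvn clean package
# the build produces target/kafka-connect-mq-source-<version>-jar-with-dependencies.jar

Repeat the same steps for the kafka-connect-mq-sink repository to build the sink connector JAR.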

Step 2: Upload the source and sink JAR files to the Amazon S3 bucket.

Use the AWS Command Line Interface (AWS CLI) to upload the custom plugin into an S3 bucket in the same AWS Region where you will create the connector.

  • $ aws s3 cp kafka-connect*.zip s3://my-bucket

Note: You will need to use the JAR files with dependencies when you create the custom plugin in the next step.
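For example, if you kept the default Maven build output from step 1, the uploads might look like the following (the file names are illustrative and depend on the connector versions you built):

aws s3 cp kafka-connect-mq-source-<version>-jar-with-dependencies.jar s3://my-bucket/
aws s3 cp kafka-connect-mq-sink-<version>-jar-with-dependencies.jar s3://my-bucket/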

Step 3: Create the custom plugin using the AWS Management Console or the MSK Connect API, following the steps in this guide: MSK Connect Custom Plugin

From the AWS console, go to your MSK cluster. Choose the custom plugin option and select the JAR file for the source connector that you uploaded to the Amazon S3 bucket in step 2.

  • Create the source connector plugin by choosing the Create button (Figure 6).


Figure 6. Create MQ Source Connector plugin

  • Follow the same steps to create the plugin for the sink connector.
  • You will now see that the source and sink connector plugins are active (Figure 7).


Figure 7. Active IBM MQ Source and Sink connector plugins.
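If you prefer to script this step instead of using the console, the MSK Connect API exposes the same operation through the AWS CLI. A sketch, assuming the bucket and file key from step 2 and a hypothetical plugin name, might look like the following:

aws kafkaconnect create-custom-plugin \
  --name ibm-mq-source-plugin \
  --content-type JAR \
  --location '{"s3Location": {"bucketArn": "arn:aws:s3:::my-bucket", "fileKey": "kafka-connect-mq-source-<version>-jar-with-dependencies.jar"}}'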

Step 4: Create and configure the connectors using MSK Connect

In this step, you will need the IBM MQ host and queue information and the Kafka topic names created earlier. This is the connector configuration that we will edit based on our environment:

connector.class=com.ibm.eventstreams.connect.mqsource.MQSourceConnector
mq.connection.name.list=host(port)
tasks.max=1
mq.queue.manager=
mq.queue=
mq.channel.name=
mq.record.builder=com.ibm.eventstreams.connect.mqsource.builders.DefaultRecordBuilder
topic=
value.converter=org.apache.kafka.connect.converters.ByteArrayConverter
key.converter=org.apache.kafka.connect.storage.StringConverter

The following settings are generic and can be specified for any connector. For example:

connector.class is the Java class of the connector.
tasks.max is the maximum number of tasks that should be created for this connector.

The following settings are specific to the IBM MQ connector.

mq.queue.manager is the name of the MQ queue manager
mq.queue is the name of the MQ queue
mq.connection.name.list contains the host and port on which MQ is listening
mq.channel.name is the name of the server-connection channel
topic is the name of the Kafka topic

The following settings control the data format:
value.converter is the converter class used for Kafka message values
key.converter is the converter class used for Kafka message keys
mq.record.builder is the class used to build Kafka Connect records from the MQ messages
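For illustration, here is the same source connector configuration filled in with hypothetical values. The host name, port, queue manager, and channel below are placeholders for your own environment; the queue and topic names match the ones used later in this lab.

connector.class=com.ibm.eventstreams.connect.mqsource.MQSourceConnector
mq.connection.name.list=mqhost.example.internal(1414)
tasks.max=1
mq.queue.manager=QM1
mq.queue=MQ.TO.MSK.QUEUE
mq.channel.name=KAFKA.SVRCONN
mq.record.builder=com.ibm.eventstreams.connect.mqsource.builders.DefaultRecordBuilder
topic=FromIBMMQTOPIC
value.converter=org.apache.kafka.connect.converters.ByteArrayConverter
key.converter=org.apache.kafka.connect.storage.StringConverter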

Step 4.1: Create the source connector to send messages from IBM MQ to the MSK Topic

  • In the AWS console, search for the MSK service and open MSK Connect. There is an option to select the cluster type: a self-managed Apache Kafka cluster or one that is managed by MSK. We select MSK cluster (Figure 8).

Figure 8. Create the source connector with Amazon MSK Connect

  • Use this sample connector configuration, but plug in the specific values for your environment (Figure 9).

Figure 9. IBM MQ and Kafka topic configuration for source connector

  • Use default values for the connector capacity and worker configuration.
  • For access permissions, we created an IAM role with the policies per the documentation. In the trusted entities, we add kafkaconnect.amazonaws.com to allow MSK Connect to assume the role (a sketch of the trust policy follows Figure 10).

Figure 10. Create IAM role for access permissions
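The trust policy itself is a standard IAM service trust relationship. A minimal sketch, assuming the kafkaconnect.amazonaws.com service principal, looks like the following:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "Service": "kafkaconnect.amazonaws.com" },
      "Action": "sts:AssumeRole"
    }
  ]
}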

  • In Security Policies, select Plaintext traffic.
  • Next, we activate logging to get more information on the execution of the connector (Figure 11).
  • In Logs, we choose to deliver logs to CloudWatch Logs.
  • Specify the log group ARN and select Create connector.


Figure 11. Activate CloudWatch logging

  • Creation of the connector takes a few minutes. You can view progress by looking in the CloudWatch logs.

Step 4.2: Create the sink connector to write messages from the MSK topic to an IBM MQ queue

  • Select the custom plugin for the sink connector (Figure 12).

Figure 12. Select plugin to create IBM MQ sink connector

  • Use this sample configuration for the sink connector (Figure 13); a hypothetical filled-in example follows the figure.


Figure 13. Connector configuration for the sink connector. 
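As a hypothetical example, here is a sink connector configuration mirroring the source settings. The host, port, queue manager, and channel values are placeholders; the topic and queue names match the ones used in the test scenarios below.

connector.class=com.ibm.eventstreams.connect.mqsink.MQSinkConnector
mq.connection.name.list=mqhost.example.internal(1414)
tasks.max=1
mq.queue.manager=QM1
mq.queue=FROM.MSK.TO.MQ.QUEUE
mq.channel.name=KAFKA.SVRCONN
mq.message.builder=com.ibm.eventstreams.connect.mqsink.builders.DefaultMessageBuilder
topics=ToIBMMQTOPIC
value.converter=org.apache.kafka.connect.converters.ByteArrayConverter
key.converter=org.apache.kafka.connect.storage.StringConverter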

  • The rest of the configuration options should be the same as in the source connector creation in step 4.1.
  • Once the connectors are created, you will see them in a running state. If there are issues during creation, the connector goes into a failed state and the CloudWatch logs need to be reviewed.

Step 5: Validate messages are flowing between MQ and Kafka

To test the integration we just configured, use the Kafka client on Amazon EC2 to publish and consume data from the topic. If you used the quick start to provision Amazon MSK, you already have the Kafka client on Amazon EC2. If not, you can create a Kafka client machine on an Amazon EC2 instance by following the steps in the MSK Developer Guide.
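Setting up a Kafka client host is essentially a download and unpack. A minimal sketch follows; the Kafka and Scala versions shown are examples, so use the version that matches your MSK cluster, and install a Java runtime first if the instance does not already have one.

wget https://archive.apache.org/dist/kafka/2.8.1/kafka_2.12-2.8.1.tgz
tar -xzf kafka_2.12-2.8.1.tgz
cd kafka_2.12-2.8.1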

Test scenario A: Publish message from IBM MQ on-premises to Amazon MSK

Publish a message to the queue MQ.TO.MSK.QUEUE and validate that it is published to the Kafka topic FromIBMMQTOPIC. The client tools used in this example for validation are Kafdrop and IBM MQ Explorer.

    • Validate that the queue MQ.TO.MSK.QUEUE exists on the IBM MQ queue manager by using IBM MQ Explorer.
    • Put a message on the IBM MQ queue MQ.TO.MSK.QUEUE using IBM MQ Explorer (Figure 14).


Figure 14. Put a message on MQ queue to test source connector.

    • You should see the messages in the Kafka topic: FromIBMMQTOPIC.
    • You can view the messages in the Kafka topic using Kafdrop (Figure 15).

Figure 15. View messages in Kafka topic using Kafdrop

Alternatively, you can also use the Kafka client to see if the message was published to the topic. On your Amazon EC2 instance, go to the Kafka installation directory and run the following command:

<kafka-client-install dir>/bin/kafka-console-consumer.sh --from-beginning --bootstrap-server <bootstrap server connection endpoint> --topic FromIBMMQTOPIC

Test scenario B: Write back from Amazon MSK to an on-premises MQ system for on-premises applications to consume.

Here we publish a message to the Kafka topic ToIBMMQTOPIC, and the message is sent to the IBM MQ queue FROM.MSK.TO.MQ.QUEUE.

    • On your IBM MQ queue manager, check that the queue FROM.MSK.TO.MQ.QUEUE exists.
    • Publish a message to the MSK topic using the Kafka client machine on the Amazon EC2 instance.
      • <kafka client install dir>/bin/kafka-console-producer.sh --broker-list <bootstrap server connection endpoint> --topic ToIBMMQTOPIC
    • Type the test message to publish at the ‘>’ prompt:
      • >This is test to write back to on-premises, “12/14/2023 05:30:00”
      • >This is test#2 on “12/14/2023 05:32:00”
    • Check that the messages arrived on the IBM MQ queue. Using IBM MQ Explorer, you can see that the messages are written to FROM.MSK.TO.MQ.QUEUE.
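If you prefer the command line over IBM MQ Explorer, a quick way to confirm that the messages landed is to query the current queue depth with runmqsc (QM1 is a placeholder for your queue manager name):

echo "DISPLAY QSTATUS(FROM.MSK.TO.MQ.QUEUE) CURDEPTH" | runmqsc QM1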

In this section, we configured and validated the data flow between IBM MQ and Amazon MSK, with the connectors fully managed by MSK Connect.

Conclusion

In this blog, we provided an approach to migrate producer and consumer applications from IBM MQ to the AWS managed Kafka service, Amazon MSK. The approach relies on the IBM MQ Kafka connectors as an integration layer that allows messages to flow between on-premises systems and AWS. This enables a phased approach to validate and migrate legacy applications iteratively and to build new applications that consume data from Amazon MSK.

Author Bio

Malathi Pinnamaneni

Malathi Pinnamaneni is a Senior Modernization Solutions Architect at AWS where she helps her customers innovate and transform their customer experiences through the adoption of serverless and event-driven architectures. She leverages her background in databases and middleware technologies along with her passion for developing reusable architecture reference patterns to accelerate customers’ journeys to AWS.

Subhajit Maitra

Subhajit is the Worldwide Mainframe Partner Solution Architect at AWS and helped build the Mainframe Modernization competency program. He is also a builder for the AWS Mainframe Modernization service, contributing to the IBM MQ integration. His areas of specialization include mainframe modernization, message-oriented middleware, distributed event streaming platforms, and microservices.