
State Unemployment Systems Are Overwhelmed. Mainframe Offload with Apache Kafka Can Relieve the Pressure.


COVID-19 has created extraordinary challenges in virtually every industry. While much attention has been focused on the pandemic’s effect on the travel, hospitality, airline, and healthcare industries, its impact on state and federal agencies has been just as profound. With millions of Americans suddenly out of work, state unemployment systems are struggling, and many are buckling under the surge in demand, leading to system failures and slowdowns.

In response to the crisis, the governor of New Jersey issued a plea for developers who know COBOL, the 60-year-old language used to develop the state’s unemployment system. While aging code bases are certainly part of the problem, a major contributing factor is the data and where it lives: locked away on mainframes. Efforts to scale overwhelmed unemployment systems are running headlong into the difficulties associated with getting data out of the mainframe. New Jersey and other states have found that as employees have relocated or retired over the decades, the tribal knowledge needed to maintain the systems—and access the data—has gone with them.

Any organization that has monolithic systems and brittle, aging code is susceptible to being hit with a new and unprecedented wave of demand. The pandemic is shining a bright light on the problem, as the failure of unemployment agencies to meet that demand is compounding the difficulties of those who have just lost their jobs.

Apache Kafka for mainframe offload

The good news is that this problem is not intractable. IT groups across many industries have begun using event streaming with Apache Kafka® to free data from its mainframe quarantine. As a result, these groups are able to make use of the data in modern, scalable applications that can readily be updated and adapted to meet evolving demands and technology requirements.
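As a rough illustration of the offload pattern, the sketch below publishes a record extracted from a mainframe system to a Kafka topic. The broker address, topic name, and payload are hypothetical placeholders; in practice the records would typically come from a change data capture tool or batch extract rather than a hard-coded string.

```java
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class MainframeOffloadProducer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "kafka.example.internal:9092"); // placeholder address
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("acks", "all"); // wait for full acknowledgment so claim records aren't lost in transit

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // In a real pipeline this payload would come from a CDC feed or batch extract
            // reading the mainframe datasets; the JSON string here simply stands in for it.
            String claimJson = "{\"claimId\":\"12345\",\"status\":\"RECEIVED\"}";
            producer.send(new ProducerRecord<>("mainframe.claims", "12345", claimJson));
            producer.flush();
        }
    }
}
```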

With Kafka acting as a buffer between the mainframe and newer services, many of these mainframe offload initiatives can be completed relatively inexpensively without rewriting code or calling for a regiment of heroic COBOL developers to come out of retirement. On the scalability front, Kafka can persist data and play it back later, so when a spike in demand causes more data to flow in than a backend system can handle at once, Kafka can absorb that data for as long as needed and feed it steadily into the backend system without overloading it.
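A minimal sketch of that buffering idea, assuming the offloaded records land on a hypothetical mainframe.claims topic: the consumer below drains the backlog in small batches at a pace the backend can absorb, while Kafka retains whatever has not yet been processed.

```java
import java.time.Duration;
import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class ThrottledBackendFeeder {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "kafka.example.internal:9092"); // placeholder address
        props.put("group.id", "unemployment-claims-backend");
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("enable.auto.commit", "false");
        props.put("max.poll.records", "100"); // cap how many records the backend sees per poll

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(List.of("mainframe.claims")); // hypothetical offload topic
            while (true) {
                // Kafka holds the backlog; the consumer feeds it to the backend in steady batches.
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(1));
                for (ConsumerRecord<String, String> record : records) {
                    writeToBackend(record.value());
                }
                consumer.commitSync(); // only advance offsets once the backend has the data
            }
        }
    }

    private static void writeToBackend(String claim) {
        // Placeholder for the call into the downstream system of record.
    }
}
```

Because offsets are committed only after the backend write succeeds, a restart picks up where the previous run left off rather than dropping claims during a demand spike.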

While Kafka by itself can address many of the mainframe-centric technical challenges facing unemployment agencies and other organizations struggling to keep up with the demands the pandemic has imposed upon them, clearing operational hurdles and meeting other environment-specific needs may require additional capabilities. Many agencies, for example, lack the personnel and resources to deploy and manage Kafka clusters on their own.

For these organizations, and others that have adopted a cloud-first mindset, a fully managed, cloud-native Kafka service such as Confluent Cloud can not only serve as a springboard for rapid deployment but also reduce maintenance overhead and staffing requirements over the long term. For environments running Kubernetes, Confluent Platform automates the deployment of Kafka on that runtime via Confluent Operator, enabling a team to set up a production-ready event streaming platform in minutes, on premises or in the cloud. In any environment, Control Center makes it easy to manage and monitor Kafka, enabling teams to track the health of their clusters and identify potential problem areas during peak loads.

Further, because the data stored on agency mainframes often includes sensitive personal information, it’s important to control access to this data as it is streamed, and Role-Based Access Control in Confluent Platform enables organizations to set up rules that do just that. Looking beyond the immediate needs of the present situation, there are also opportunities to use the newly available data in innovative ways, for example by using ksqlDB to enrich and curate it midstream, in real time, with simple SQL statements, in order to provide new and better services to the citizens being served.
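As a hedged sketch of that enrichment idea, the ksqlDB Java client can submit a persistent query that joins a stream of offloaded claims with a claimants lookup table. The host, stream, table, and column names below are all hypothetical, and the example assumes those objects have already been declared over the underlying topics.

```java
import io.confluent.ksql.api.client.Client;
import io.confluent.ksql.api.client.ClientOptions;

public class EnrichClaimsExample {
    public static void main(String[] args) throws Exception {
        // Connect to a ksqlDB server; host and port are illustrative placeholders.
        ClientOptions options = ClientOptions.create()
                .setHost("ksqldb.example.internal")
                .setPort(8088);
        Client client = Client.create(options);

        // Hypothetical objects: a CLAIMS stream fed by the mainframe offload topic and a
        // CLAIMANTS table keyed by claimant ID, joined so downstream services receive
        // enriched events without ever touching the mainframe.
        String statement =
            "CREATE STREAM ENRICHED_CLAIMS AS "
          + "SELECT c.CLAIM_ID, c.AMOUNT, p.NAME, p.ADDRESS "
          + "FROM CLAIMS c JOIN CLAIMANTS p ON c.CLAIMANT_ID = p.CLAIMANT_ID "
          + "EMIT CHANGES;";

        client.executeStatement(statement).get(); // waits until the statement is accepted
        client.close();
    }
}
```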

Learn more about event streaming

If you want to learn more about making data from your mainframe systems available in a modern, event-driven architecture, this blog post covering the entire journey is a good place to start. Plenty of real-world use cases are available as well. Alight Solutions, for example, lowered costs by offloading work and reducing demand on its mainframe systems. RBC used Confluent Platform to free data locked away in its accumulated IT assets, including its mainframe, with a cloud-native, microservices-based approach. This Express Scripts online talk describes the company’s transformation from a mainframe to a microservices-based ecosystem using Kafka and change data capture (CDC) technology. Finally, when you’re ready to get into the how-to details, check out the online talk on Mainframe Integration, Offloading and Replacement with Apache Kafka.

