
Presentation

Real-Time Inter-Agency Data Sharing With Kafka

Current 2022

US government agencies are required to share large volumes of data in order to carry out their critical missions. Cross-agency data sharing underpins US immigration and naturalization processes, the issuance of passports and visas, and the resettlement of border migrants, asylum seekers, and refugees into the US.

The challenge is that current data-sharing processes are not real-time: agencies must establish a set of queries and access data through request-response models. In addition, most of this data is stored in legacy transactional systems, so agencies must access or replicate it from those systems. Indeed, the data is so siloed and the systems so brittle that agencies have found themselves resorting to spreadsheets to share data in times of crisis, such as the Afghanistan airlift of 2021.

Kafka has transformed how government agencies share data in real time. Because Kafka decouples services from one another and shares data through a publish/subscribe model, US government agencies can now exchange data in real time and realize the benefits outlined below:

  • Speed of onboarding new data sets: Reduced time and labor required to onboard new data sets driven by policy changes and similar events
  • Real-time event notification: Changes in transactional systems are shared in real time with analytics, visualization, or other downstream systems
  • Cost reduction for data sharing: Cost savings and operational efficiencies realized by sharing only changes rather than continuously querying the same data sets
  • Data quality: Enhancing and enriching the data sets being sent and received
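The decoupling described above can be illustrated with a minimal in-process sketch of the publish/subscribe model. This is not Kafka client code; it is a toy stand-in where a `Broker` class plays the role of the Kafka cluster, and the topic name `visa_status_changes` and the event shape are hypothetical examples:

```python
from collections import defaultdict
from typing import Callable

class Broker:
    """Toy stand-in for a Kafka cluster: routes events to subscribers by topic."""
    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, topic: str, handler: Callable[[dict], None]) -> None:
        self._subscribers[topic].append(handler)

    def publish(self, topic: str, event: dict) -> None:
        # The producer never references its consumers directly,
        # so publishing and consuming agencies stay decoupled.
        for handler in self._subscribers[topic]:
            handler(event)

broker = Broker()
received = []

# Two independent downstream systems subscribe to the same topic.
broker.subscribe("visa_status_changes", lambda e: received.append(("analytics", e)))
broker.subscribe("visa_status_changes", lambda e: received.append(("visualization", e)))

# The transactional system publishes a change event once; every subscriber sees it.
broker.publish("visa_status_changes", {"case_id": "A123", "status": "approved"})
```

In a real deployment the broker is a Kafka cluster, and each subscriber is a consumer group reading the topic independently; the key property shown here is that the publisher emits each change exactly once instead of each consumer repeatedly querying the source system.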
