Change data capture is a popular method for connecting database tables to data streams, but it comes with drawbacks. The next evolution of the CDC pattern, first-class data products, provides resilient pipelines that support both real-time and batch processing while isolating upstream systems...
Learn how the latest innovations in Kora enable us to introduce new Confluent Cloud Freight clusters, which can save you up to 90% at GBps+ scale. Confluent Cloud Freight clusters are now available in Early Access.
Learn how to contribute to open source Apache Kafka by writing Kafka Improvement Proposals (KIPs) that solve problems and add features! Read on for real examples.
Build with Confluent helps system integrators develop joint solutions faster with specialized software bundles, support from data streaming experts to certify offerings, and access to Confluent’s Go-To-Market (GTM) teams to amplify those offerings in the market.
Part two in the series on using Flink SQL, Kafka, and Streamlit dives into asyncio, Flink SQL syntax, and the structure of Streamlit's bar chart component.
Let’s learn more about Jennifer, how she thrives on solving complex people, process, and technology challenges—and how Confluent provides the support and motivation she needs to excel in her role.
In the past, technology served as a supportive function for business. Over time, it has become the business itself. A similar shift is happening with data streaming: it is now a critical foundation of modern business, and this year is an inflection point for data streaming platforms.
In part one of this series, we’ll build an app, powered by Kafka and Flink SQL in Confluent Cloud and visualized with Streamlit, that lets a user select a stock, in this case SPY, the SPDR S&P 500 ETF Trust. Upon selection, a live chart of the stock’s bid prices, calculated every five seconds...
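The post walks through the real implementation; as a rough illustration of the shape of such an app, here is a minimal sketch of the Streamlit side that tails a Kafka topic of windowed bid prices and redraws a live chart. The topic name `spy_bid_prices`, the field `bid_price`, and the connection placeholders are all assumptions, not the series' actual code:

```python
# Hypothetical sketch: a Streamlit app that tails a Kafka topic of
# five-second windowed bid prices and redraws a live chart.
import json

import streamlit as st
from confluent_kafka import Consumer

consumer = Consumer({
    "bootstrap.servers": "<BOOTSTRAP_SERVERS>",  # Confluent Cloud endpoint
    "security.protocol": "SASL_SSL",
    "sasl.mechanisms": "PLAIN",
    "sasl.username": "<API_KEY>",
    "sasl.password": "<API_SECRET>",
    "group.id": "spy-bid-chart",
    "auto.offset.reset": "latest",
})
consumer.subscribe(["spy_bid_prices"])  # assumed output topic of the Flink SQL job

st.title("SPY bid price (5-second windows)")
chart = st.empty()  # placeholder we overwrite on each new window
prices = []

while True:  # blocking loop is fine for a demo script
    msg = consumer.poll(1.0)
    if msg is None or msg.error():
        continue
    record = json.loads(msg.value())
    prices.append(record["bid_price"])  # assumed field name
    chart.line_chart(prices[-100:])     # keep the chart to recent windows
```

The heavy lifting (the five-second aggregation itself) would live in the Flink SQL job upstream; the app only renders whatever that job writes to the topic.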
Businesses that are best able to leverage data have a significant competitive advantage. This is especially true in financial services, an industry in which leading organizations are in constant competition to develop the most responsive, personalized customer experiences.
The post discusses the Dual-Write Problem in distributed systems, where atomically updating multiple systems, such as a database and a messaging system like Apache Kafka, is challenging and can leave them inconsistent. It outlines common anti-patterns that fail to address the issue...
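To make the problem concrete, here is a minimal sketch of the naive dual write itself (table, topic, and function names are hypothetical, not taken from the post):

```python
# Hypothetical illustration of the dual-write anti-pattern: two independent
# writes with no shared transaction spanning both systems.
import json
import sqlite3

from confluent_kafka import Producer

db = sqlite3.connect("orders.db")
db.execute("CREATE TABLE IF NOT EXISTS orders (id TEXT PRIMARY KEY, amount REAL)")
producer = Producer({"bootstrap.servers": "localhost:9092"})

def place_order(order_id: str, amount: float) -> None:
    # Write 1: commit the order to the database.
    db.execute("INSERT INTO orders (id, amount) VALUES (?, ?)", (order_id, amount))
    db.commit()

    # If the process crashes HERE, the order exists in the database but no
    # event is ever published: the two systems have silently diverged.

    # Write 2: publish the event to Kafka -- a separate, non-atomic operation.
    producer.produce("orders", key=order_id,
                     value=json.dumps({"id": order_id, "amount": amount}))
    producer.flush()

place_order("order-1", 99.95)
```

Reordering the two writes only moves the failure window rather than removing it, which is why dedicated patterns are needed instead.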
Find out how Zhibo handles tricky conversations with customers who aren’t quite sure what data problems they have and how Confluent can help.
If you know me, you know that I’m always looking for any excuse to bring the data streaming community together.
The blog post covers best practices and recommendations for the Confluent Terraform Provider, with guidance on provisioning Confluent Cloud resources efficiently and in line with industry standards. Additionally, it provides a GitHub repository...
The Data Streaming Awards is back for its third year! Designed to bring the data streaming community together, this one-of-a-kind industry award event recognizes organizations that are harnessing the power of this revolutionary technology to drive business and customer experience transformation.
Analyzing Confluent Cloud audit logs is good, but being proactively notified when something suspicious happens is better. This article provides a conceptual guide to building a pipeline that forwards Confluent Cloud audit logs into Splunk and defines automatic alerts for certain events.
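As a conceptual starting point, a small bridge like the following is enough to get audit records into Splunk's HTTP Event Collector, where the alerts themselves are then defined. The cluster address, credentials, and Splunk host are placeholders, and this is a sketch of the idea rather than the article's pipeline:

```python
# Conceptual sketch: forward Confluent Cloud audit log records to Splunk's
# HTTP Event Collector (HEC).
import json

import requests
from confluent_kafka import Consumer

SPLUNK_HEC_URL = "https://splunk.example.com:8088/services/collector/event"
SPLUNK_TOKEN = "<HEC_TOKEN>"

consumer = Consumer({
    "bootstrap.servers": "<AUDIT_LOG_CLUSTER>",  # the dedicated audit log cluster
    "security.protocol": "SASL_SSL",
    "sasl.mechanisms": "PLAIN",
    "sasl.username": "<API_KEY>",
    "sasl.password": "<API_SECRET>",
    "group.id": "audit-log-to-splunk",
    "auto.offset.reset": "earliest",
})
consumer.subscribe(["confluent-audit-log-events"])  # Confluent Cloud's audit log topic

while True:
    msg = consumer.poll(1.0)
    if msg is None or msg.error():
        continue
    event = json.loads(msg.value())
    # Hand the record to Splunk; alerting on specific event types
    # (e.g., failed authentications) is then configured in Splunk itself.
    requests.post(
        SPLUNK_HEC_URL,
        headers={"Authorization": f"Splunk {SPLUNK_TOKEN}"},
        json={"event": event},
        timeout=5,
    )
```

A production version would batch events, handle HEC errors, and commit offsets only after successful delivery, but the division of labor stays the same: the consumer moves the data, Splunk owns the alerting rules.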
Apache Kafka® has become the de facto standard for data streaming, used by organizations everywhere to anchor event-driven architectures and power mission-critical real-time applications.