Change data capture is a popular method to connect database tables to data streams, but it comes with drawbacks. The next evolution of the CDC pattern, first-class data products, provides resilient pipelines that support both real-time and batch processing while isolating upstream systems...
Learn how the latest innovations in Kora enable us to introduce new Confluent Cloud Freight clusters, which can save you up to 90% at GBps+ scale. Confluent Cloud Freight clusters are now available in Early Access.
Learn how to contribute to open source Apache Kafka by writing Kafka Improvement Proposals (KIPs) that solve problems and add features! Read on for real examples.
Building a headless data architecture requires us to identify the work we’re already doing deep inside our data analytics plane, and shift it left. Learn the specifics in this blog.
A headless data architecture means no longer having to coordinate multiple copies of data, and being free to use whatever processing or query engine is most suitable for the job. This blog details how it works.
Event design plays a big role in your ability to fix bad data in your streams. But if you’ve wrecked a stream with bad data (i.e., it’s unavoidably contaminated), you'll need to employ a "rewind, rebuild, and retry" strategy.
At a high level, bad data is data that doesn’t conform to what is expected, and it can cause serious issues and outages for all downstream data users. This blog looks at how bad data may come to be, and how we can deal with it when it comes to event streams.
Versioned key-value state stores, introduced in Kafka Streams 3.5, enhance stateful processing capabilities by allowing users to store multiple record versions per key, rather than only the single latest version per key as existing key-value stores do...
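As a minimal sketch of what this enables (the store name "prices", the one-hour retention, and the "price-updates" topic below are illustrative assumptions, not details from the post):

    import java.time.Duration;
    import org.apache.kafka.common.serialization.Serdes;
    import org.apache.kafka.streams.StreamsBuilder;
    import org.apache.kafka.streams.kstream.Consumed;
    import org.apache.kafka.streams.processor.api.Processor;
    import org.apache.kafka.streams.processor.api.ProcessorContext;
    import org.apache.kafka.streams.processor.api.Record;
    import org.apache.kafka.streams.state.Stores;
    import org.apache.kafka.streams.state.VersionedKeyValueStore;
    import org.apache.kafka.streams.state.VersionedRecord;

    public class VersionedStoreSketch {
        public static void main(String[] args) {
            StreamsBuilder builder = new StreamsBuilder();

            // Retain one hour of older versions per key alongside the latest value.
            builder.addStateStore(Stores.versionedKeyValueStoreBuilder(
                Stores.persistentVersionedKeyValueStore("prices", Duration.ofHours(1)),
                Serdes.String(), Serdes.Double()));

            builder.stream("price-updates", Consumed.with(Serdes.String(), Serdes.Double()))
                .process(() -> new Processor<String, Double, Void, Void>() {
                    private VersionedKeyValueStore<String, Double> store;

                    @Override
                    public void init(ProcessorContext<Void, Void> context) {
                        store = context.getStateStore("prices");
                    }

                    @Override
                    public void process(Record<String, Double> record) {
                        // Writes are timestamped, so older versions are retained...
                        store.put(record.key(), record.value(), record.timestamp());
                        // ...and reads can ask for the value as of a past timestamp.
                        VersionedRecord<Double> hourAgo =
                            store.get(record.key(), record.timestamp() - 3_600_000L);
                        if (hourAgo != null) {
                            System.out.println(record.key() + " an hour ago: " + hourAgo.value());
                        }
                    }
                }, "prices");
        }
    }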
This blog post discusses the Two Generals problem, how it impacts message delivery guarantees, and how those guarantees would affect a futuristic technology such as teleportation.
Stream processing has long forced an uncomfortable trade-off: choose a framework for its power, or choose one in your preferred programming language. GraalVM may offer a way to avoid having to choose.
An approach to combining Change Data Capture (CDC) messages from a relational database into transactional messages using Kafka Streams.
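A loose sketch of the shape this can take, assuming the CDC feed is keyed (or has been re-keyed) by transaction id; a real implementation would also watch for the database's commit marker to know when a transaction is complete, which is elided here:

    import org.apache.kafka.common.serialization.Serdes;
    import org.apache.kafka.streams.StreamsBuilder;
    import org.apache.kafka.streams.kstream.Consumed;
    import org.apache.kafka.streams.kstream.Grouped;
    import org.apache.kafka.streams.kstream.KTable;
    import org.apache.kafka.streams.kstream.Materialized;

    public class TransactionAssemblerSketch {
        public static void main(String[] args) {
            StreamsBuilder builder = new StreamsBuilder();

            // Fold individual row-change events into one aggregate per transaction id.
            KTable<String, String> perTransaction = builder
                .stream("cdc.row-changes", Consumed.with(Serdes.String(), Serdes.String()))
                .groupByKey(Grouped.with(Serdes.String(), Serdes.String()))
                .aggregate(
                    () -> "",
                    (txnId, change, agg) -> agg.isEmpty() ? change : agg + "," + change,
                    Materialized.with(Serdes.String(), Serdes.String()));

            // Each downstream update now reflects everything seen so far for that transaction.
            perTransaction.toStream().to("transactions");
        }
    }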
Change data capture (CDC) converts all the changes that occur inside your database into events and publishes them to an event stream. You can then use these events to power analytics, drive operational use cases, hydrate databases, and more. The pattern is enjoying wider adoption than ever before.
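For a concrete sense of the pattern, a hedged sketch of tailing a Debezium-style CDC topic and routing each change by its envelope "op" field ("c" create, "u" update, "d" delete); the topic name and the handling here are assumptions for illustration:

    import java.time.Duration;
    import java.util.List;
    import java.util.Properties;
    import com.fasterxml.jackson.databind.JsonNode;
    import com.fasterxml.jackson.databind.ObjectMapper;
    import org.apache.kafka.clients.consumer.ConsumerRecord;
    import org.apache.kafka.clients.consumer.KafkaConsumer;

    public class CdcTailSketch {
        public static void main(String[] args) throws Exception {
            Properties props = new Properties();
            props.put("bootstrap.servers", "localhost:9092");
            props.put("group.id", "cdc-tail");
            props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
            props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

            ObjectMapper mapper = new ObjectMapper();
            try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
                consumer.subscribe(List.of("dbserver1.inventory.orders")); // assumed topic name
                while (true) {
                    for (ConsumerRecord<String, String> rec : consumer.poll(Duration.ofSeconds(1))) {
                        // Assumes a JSON converter with schemas disabled, so the Debezium
                        // envelope (before/after/op) is the top-level object.
                        JsonNode event = mapper.readTree(rec.value());
                        switch (event.get("op").asText()) {
                            case "c" -> System.out.println("insert: " + event.get("after"));
                            case "u" -> System.out.println("update: " + event.get("after"));
                            case "d" -> System.out.println("delete: " + event.get("before"));
                        }
                    }
                }
            }
        }
    }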
In this post, we introduce how to use .NET Kafka clients along with the Task Parallel Library to build a robust, high-throughput event streaming application...
This Thanksgiving-themed blog post walks through a brand-new stream processing recipe for analyzing survey responses in real time, and gives ideas for how to spice it up and make the recipe your own!
Joining two topics to aggregate the data is one of the fundamental operations in stream processing. But that's not to say that it's simple. Let me show you what can go wrong!
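For instance, a minimal happy-path sketch of a windowed stream-stream join (topic names, value types, and the five-minute window are assumptions): records pair up only when both sides share a key and arrive within the window, which is exactly where out-of-order data starts causing trouble.

    import java.time.Duration;
    import org.apache.kafka.common.serialization.Serdes;
    import org.apache.kafka.streams.StreamsBuilder;
    import org.apache.kafka.streams.kstream.Consumed;
    import org.apache.kafka.streams.kstream.JoinWindows;
    import org.apache.kafka.streams.kstream.KStream;
    import org.apache.kafka.streams.kstream.StreamJoined;

    public class OrderShipmentJoinSketch {
        public static void main(String[] args) {
            StreamsBuilder builder = new StreamsBuilder();

            KStream<String, String> orders =
                builder.stream("orders", Consumed.with(Serdes.String(), Serdes.String()));
            KStream<String, String> shipments =
                builder.stream("shipments", Consumed.with(Serdes.String(), Serdes.String()));

            // A record joins only with records on the other side that share its key
            // and fall within +/- 5 minutes; anything later silently misses the join.
            orders.join(
                    shipments,
                    (order, shipment) -> order + " -> " + shipment,
                    JoinWindows.ofTimeDifferenceWithNoGrace(Duration.ofMinutes(5)),
                    StreamJoined.with(Serdes.String(), Serdes.String(), Serdes.String()))
                .to("orders-with-shipments");
        }
    }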
We are excited to announce the ksqlDB 0.28.2 release as well as new cloud-specific improvements! This release simplifies the getting-started experience and helps you run and monitor critical pipelines...
Serverless stream processing with Apache Kafka® is a powerful yet often underutilized field. Microsoft’s Azure Functions, in combination with ksqlDB and Confluent’s sink connector, provide a powerful and easy-to-use set...