Change data capture is a popular method to connect database tables to data streams, but it comes with drawbacks. The next evolution of the CDC pattern, first-class data products, provides resilient pipelines that support both real-time and batch processing while isolating upstream systems...
Confluent Cloud Freight clusters are now Generally Available on AWS. In this blog, learn how Freight clusters can save you up to 90% at GBps+ scale.
Learn how to contribute to open source Apache Kafka by writing Kafka Improvement Proposals (KIPs) that solve problems and add features! Read on for real examples.
Get a high-level overview of source connector tuning: what can and cannot be tuned, and a tuning methodology that applies to any and all source connectors.
Learn the basics of what an Apache Kafka cluster is and how it works, from brokers and partitions to how the cluster balances load, handles replication, and recovers from leader and replica failures.
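To make partitions and replication a little more concrete, here is a minimal sketch (not taken from the article) that creates a topic with the Python confluent-kafka AdminClient; the broker address, topic name, and partition/replication counts are illustrative assumptions.

```python
from confluent_kafka.admin import AdminClient, NewTopic

# Hypothetical broker address and topic name, for illustration only.
admin = AdminClient({"bootstrap.servers": "localhost:9092"})

# Six partitions spread load across brokers; replication factor 3 means each
# partition has one leader and two follower replicas that can take over on failure.
futures = admin.create_topics([NewTopic("orders", num_partitions=6, replication_factor=3)])

for topic, future in futures.items():
    try:
        future.result()  # raises if topic creation failed
        print(f"created topic {topic}")
    except Exception as exc:
        print(f"failed to create {topic}: {exc}")
```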
When developing streaming applications, one crucial aspect that often goes unnoticed is that Java and non-Java producers use different default partitioning behavior. This disparity can result in data mismatches and inconsistencies, posing challenges for developers.
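As one concrete illustration of that disparity, consider the Python confluent-kafka client, which is built on librdkafka: its default partitioner is CRC32-based ("consistent_random"), while the Java client defaults to a murmur2 hash of the key, so the same key can land on different partitions. The sketch below (broker address and topic are assumptions, not from the article) shows one way to keep the two aligned by overriding the partitioner.

```python
from confluent_kafka import Producer

producer = Producer({
    "bootstrap.servers": "localhost:9092",
    # librdkafka's default ("consistent_random", CRC32-based) differs from the
    # Java client's murmur2-based default. Selecting murmur2_random keeps keyed
    # messages on the same partitions as a Java producer would choose.
    "partitioner": "murmur2_random",
})

producer.produce("orders", key="customer-42", value="order created")
producer.flush()
```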
Learn when to consider expanding to multiple Apache Kafka clusters, how to manage the operations for large clusters, and tools and resources for efficient operations.
The term “event” shows up in a lot of different Apache Kafka® arenas. There’s “event-driven design,” “event sourcing,” “designing events,” and “event streaming.” What is an event, and how does the role it plays differ across these contexts?
We are proud to announce the release of Apache Kafka® 3.5.0. This release contains many new features and improvements. This blog post will highlight some of the more prominent features.
Companies are looking to optimize cloud and tech spend, and being incredibly thoughtful about which priorities get assigned precious engineering and operations resources. “Build vs. Buy” is being taken seriously again. And if we’re honest, this probably makes sense. There is a lot to optimize.
Operating Kafka at scale can consume your cloud spend and engineering time. Everyday tasks like scaling or deploying new clusters can be complex and require dedicated engineers. This post focuses on how Confluent Cloud is 1) Resource Efficient, 2) Fully Managed, and 3) Complete.
In part 2 of our blog series on understanding and optimizing your Kafka costs, we dive into how to estimate costs stemming from the development and operations personnel needed to self-manage Kafka.
It's hard to properly calculate the cost of running Kafka. In part 1 of 4, learn to calculate your Kafka costs based on your infrastructure, networking, and cloud usage.
If you’ve been working with Kafka Streams and have seen an “unknown magic byte” error, you might be wondering what a magic byte is in the first place, and how to resolve the error. This post answers both questions.
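For context, serializers that integrate with Confluent Schema Registry prepend a one-byte magic byte (0) and a four-byte schema ID to every payload; a deserializer that finds anything else up front reports “unknown magic byte.” The sketch below is an illustrative Python helper (not from the article) that peeks at that framing.

```python
import struct

def inspect_confluent_framing(value: bytes) -> int:
    """Check the Schema Registry wire format: 1 magic byte (0) + 4-byte schema ID + payload."""
    if len(value) < 5 or value[0] != 0:
        # This is the situation that surfaces as "unknown magic byte": the bytes were
        # not produced by a Schema Registry-aware serializer (e.g. plain JSON or String).
        raise ValueError("not Confluent wire format (expected magic byte 0 + 4-byte schema ID)")
    schema_id = struct.unpack(">I", value[1:5])[0]  # big-endian schema ID
    print(f"magic byte = 0, schema id = {schema_id}, payload bytes = {len(value) - 5}")
    return schema_id
```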
Get an introduction to why Python is becoming a popular language for developing Apache Kafka client applications. You will learn about several benefits that Kafka developers gain by using the Python language.
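As a taste of how compact a Python Kafka client can be, here is a minimal producer sketch using the confluent-kafka package; the broker address, topic, and message contents are assumptions for illustration.

```python
from confluent_kafka import Producer

producer = Producer({"bootstrap.servers": "localhost:9092"})  # assumed broker address

def on_delivery(err, msg):
    # Called once per message when the broker acknowledges (or rejects) it.
    if err is not None:
        print(f"delivery failed: {err}")
    else:
        print(f"delivered to {msg.topic()}[{msg.partition()}] @ offset {msg.offset()}")

producer.produce("greetings", key="user-1", value="hello from Python", callback=on_delivery)
producer.flush()  # wait for outstanding deliveries before exiting
```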
Discover tools, practices, and patterns for planning geo-replicated Apache Kafka deployments to build reliable, scalable, secure, and globally distributed data pipelines that meet your business needs.
Using Apache Kafka to decouple microservices is a successful way to build a more resilient, flexible, and scalable architecture. However, it is very common for such microservices to be paired with a database. This blog walks through a real-world use case in which Kafka and ksqlDB replace the database.