Change data capture is a popular method to connect database tables to data streams, but it comes with drawbacks. The next evolution of the CDC pattern, first-class data products, provides resilient pipelines that support both real-time and batch processing while isolating upstream systems...
Learn how the latest innovations in Kora enable us to introduce new Confluent Cloud Freight clusters, which can save you up to 90% at GBps+ scale. Confluent Cloud Freight clusters are now available in Early Access.
Learn how to contribute to open source Apache Kafka by writing Kafka Improvement Proposals (KIPs) that solve problems and add features! Read on for real examples.
The rise in schema-free and document-oriented databases has led some to question the value and necessity of schemas. Schemas, in particular those following the relational model, can seem too restrictive, […]
Testing is one of the hardest parts of building reliable distributed systems. Kafka has long had a set of system tests that cover distributed operation, but this is an area […]
Summary: Confluent is starting to explore the integration of databases with event streams. As part of the first step in this exploration, Martin Kleppmann has made a new open source […]
This is an edited transcript of a talk given by Alan Woodward and Martin Kleppmann at FOSDEM 2015. Traditionally, search works like this: you have a large corpus of documents, […]
As part of Confluent Platform 1.0, released about a month ago, we included a new Kafka REST Proxy to allow more flexibility for developers and to significantly broaden the number […]
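To give a rough sense of what producing through the REST Proxy looks like, here is a minimal sketch in Java. The proxy address (localhost:8082), topic name (jsontest), and record payload are assumptions for illustration; the v2 JSON embedded format shown is the one used by current proxies, so adjust the media type to match your deployment.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class RestProxyProduceExample {
    public static void main(String[] args) throws Exception {
        // Assumed REST Proxy endpoint and topic; replace with your own.
        String url = "http://localhost:8082/topics/jsontest";

        // One record with a JSON value, wrapped in the proxy's "records" envelope.
        String body = "{\"records\":[{\"value\":{\"greeting\":\"hello\"}}]}";

        HttpRequest request = HttpRequest.newBuilder(URI.create(url))
                .header("Content-Type", "application/vnd.kafka.json.v2+json")
                .POST(HttpRequest.BodyPublishers.ofString(body))
                .build();

        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());

        // The proxy replies with per-record offsets (or errors) as JSON.
        System.out.println(response.statusCode() + " " + response.body());
    }
}
```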
The Apache Kafka community just announced the 0.8.2.1 release. This is a bug fix release and fixes 4 critical issues reported in the 0.8.2.0 release (the full list of […]
Note: For the latest, check out the blog posts Apache Kafka® Made Simple: A First Glimpse of a Kafka Without ZooKeeper and Apache Kafka Supports 200K Partitions Per Cluster.
This is an edited and expanded transcript of a talk I gave at Strange Loop 2014. The video recording (embedded below) has been watched over 8,000 times. For those of […]
If you are getting started with Kafka, one thing you’ll need to do is pick a data format. The most important thing to do is be consistent across your usage. […]
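As a minimal illustration of keeping a format consistent, the sketch below configures a Java producer that always sends UTF-8 JSON strings with the same serializers. The broker address, topic name, and record shape are hypothetical; whatever format you choose, the point is to apply the same serialization everywhere the topic is written or read.

```java
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.Producer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class ConsistentFormatProducer {
    public static void main(String[] args) {
        Properties props = new Properties();
        // Assumed broker address; replace with your cluster.
        props.put("bootstrap.servers", "localhost:9092");
        // Pick one serialization format and use it everywhere.
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");

        try (Producer<String, String> producer = new KafkaProducer<>(props)) {
            // Every message on this topic is a JSON string with the same shape.
            String value = "{\"user\":\"alice\",\"action\":\"login\"}";
            producer.send(new ProducerRecord<>("user-events", "alice", value));
        }
    }
}
```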
We are very excited to announce general availability of Confluent Platform 1.0, a stream data platform powered by Apache Kafka that enables high-throughput, scalable, reliable, and low-latency stream data […]
This is the second part of our guide on streaming data and Apache Kafka. In part one I talked about the uses for real-time data streams and explained the concept of […]
Data systems have mostly focused on the passive storage of data. Phrases like “data warehouse” or “data lake” or even the ubiquitous “data store” all evoke places data goes to […]
Some people call it stream processing. Others call it event streaming, complex event processing (CEP), or CQRS event sourcing. Sometimes, such buzzwords are just smoke and mirrors, invented by companies […]
I am very excited to tell you about the forthcoming 0.8.2 release of Apache Kafka. Kafka is a fault-tolerant, low-latency, high-throughput distributed messaging system used in data pipelines at several […]