Save 25% (or even more) on your Kafka costs | Take the Kafka savings challenge with Confluent
Change data capture is a popular method to connect database tables to data streams, but it comes with drawbacks. The next evolution of the CDC pattern, first-class data products, provides resilient pipelines that support both real-time and batch processing while isolating upstream systems...
Learn how the latest innovations in Kora enable us to introduce new Confluent Cloud Freight clusters, which can save you up to 90% at GBps+ scale. Confluent Cloud Freight clusters are now available in Early Access.
Learn how to contribute to open source Apache Kafka by writing Kafka Improvement Proposals (KIPs) that solve problems and add features! Read on for real examples.
Discover tools, practices, and patterns for planning geo-replicated Apache Kafka deployments to build reliable, scalable, secure, and globally distributed data pipelines that meet your business needs.
An approach to combining Change Data Capture (CDC) messages from a relational database into transactional messages using Kafka Streams.
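As a hedged illustration of the general idea rather than the post's exact topology, the sketch below folds individual CDC change events that share a transaction ID into one combined record with the Kafka Streams DSL. The topic names, the assumption that events arrive keyed by transaction ID, and the string-concatenation merge are illustrative stand-ins, not details from the post.

```java
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.Topology;
import org.apache.kafka.streams.kstream.Consumed;
import org.apache.kafka.streams.kstream.Grouped;
import org.apache.kafka.streams.kstream.Materialized;
import org.apache.kafka.streams.kstream.Produced;

public class CdcTransactionCombiner {

    // Minimal sketch: collapse per-row CDC events that belong to the same
    // database transaction into a single aggregate message. Assumes events
    // are already keyed by a transaction ID (illustrative assumption).
    public static Topology buildTopology() {
        StreamsBuilder builder = new StreamsBuilder();

        builder.stream("cdc-events", Consumed.with(Serdes.String(), Serdes.String()))
            .groupByKey(Grouped.with(Serdes.String(), Serdes.String()))
            // Append each change event onto the running aggregate for its
            // transaction; real merge logic would build a structured record.
            .aggregate(
                () -> "",
                (txId, event, agg) -> agg.isEmpty() ? event : agg + "," + event,
                Materialized.with(Serdes.String(), Serdes.String()))
            .toStream()
            // Emit one combined transactional message per transaction ID.
            .to("transactional-messages", Produced.with(Serdes.String(), Serdes.String()));

        return builder.build();
    }
}
```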
This post details how to minimize internal messaging within Confluent Platform clusters. Service meshes and containerized applications have popularized the idea of control and data planes. This post applies that idea to Confluent Platform clusters and highlights its use in Confluent Cloud.
Using Apache Kafka to decouple microservices is a successful way to build a more resilient, flexible, and scalable architecture. However, it is very common for such microservices to be paired with a database. This blog post presents a real-world use case in which Kafka, together with ksqlDB, replaces that database.
This article summarizes dynamic versus static consumer group membership in Apache Kafka. It shows how each approach affects rebalancing in applications with heavy state and explains how to choose between them.
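For readers who want to see the knob involved, the hedged sketch below configures a consumer for static group membership via `group.instance.id`; the broker address, group and topic names, and timeout value are illustrative assumptions.

```java
import java.time.Duration;
import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

public class StaticMembershipConsumer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // illustrative
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "orders-processor");
        // Static membership: a stable per-instance ID lets a restarted consumer
        // rejoin without triggering a rebalance, provided it returns within
        // session.timeout.ms. Omitting this property gives dynamic membership.
        props.put(ConsumerConfig.GROUP_INSTANCE_ID_CONFIG, "orders-processor-1");
        props.put(ConsumerConfig.SESSION_TIMEOUT_MS_CONFIG, "60000");
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(List.of("orders"));
            while (true) {
                consumer.poll(Duration.ofMillis(500)).forEach(record ->
                    System.out.printf("%s=%s%n", record.key(), record.value()));
            }
        }
    }
}
```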
Who isn’t familiar with Michelin? Whether it’s their extensive product line of tires for nearly every vehicle imaginable (including space shuttles), or the world-renowned Michelin Guide that has determined the standard of excellence for fine dining for over 100 years, you’ve probably heard of them.
Learn what windowing is in Kafka Streams and get comfortable with the differences between the main types.
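As a hedged illustration of one of those window types, the sketch below counts records per key in five-minute tumbling windows with the Kafka Streams DSL; the topic names and window size are assumptions, and a hopping or session window would use `advanceBy(...)` or `SessionWindows` instead.

```java
import java.time.Duration;
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.Topology;
import org.apache.kafka.streams.kstream.Consumed;
import org.apache.kafka.streams.kstream.Grouped;
import org.apache.kafka.streams.kstream.Produced;
import org.apache.kafka.streams.kstream.TimeWindows;

public class TumblingWindowCount {

    // Minimal sketch: count events per key in 5-minute tumbling windows.
    // Topic names and window size are illustrative assumptions.
    public static Topology buildTopology() {
        StreamsBuilder builder = new StreamsBuilder();

        builder.stream("page-views", Consumed.with(Serdes.String(), Serdes.String()))
            .groupByKey(Grouped.with(Serdes.String(), Serdes.String()))
            // Tumbling window: fixed size, no overlap. A hopping window would
            // add .advanceBy(...); a session window would use SessionWindows.
            .windowedBy(TimeWindows.ofSizeWithNoGrace(Duration.ofMinutes(5)))
            .count()
            // Flatten the windowed key into a plain string for the output topic.
            .toStream((windowedKey, count) ->
                windowedKey.key() + "@" + windowedKey.window().start())
            .to("page-view-counts-5m", Produced.with(Serdes.String(), Serdes.Long()));

        return builder.build();
    }
}
```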
Apache Kafka 3.4 includes early access to ZooKeeper-to-KRaft migrations, enabling existing Kafka clusters to migrate to KRaft mode and gain scalability and resiliency benefits. Additionally, 3.4 includes several updates to Kafka Core, Streams, Connect, and more.
When I first joined Confluent, I just wanted to make an impact. Of course, everyone wants to grow in their careers, but especially as a proud first-generation American from a Hispanic family, being able to get my engineering degree and find a successful role at a tech company has been deeply...
Announcing the latest updates to Confluent’s cloud-native data streaming platform: centralized identity management, enhanced RBAC, Client Quotas, and more.
Confluent is pleased to announce that the Confluent CLI—the leading command-line tool for managing enterprise Kafka deployments and modern data flow—is now source available under the Confluent Community License.
At Treehouse Software, when we speak with customers who are planning to modernize their enterprise mainframe systems, there’s a common theme: they are faced with decades of mission-critical and historical legacy mainframe data in disparate databases...
Building data streaming applications and growing them beyond a single team is challenging. Data silos develop easily and can be difficult to break down. The tools provided by Confluent’s Stream Governance platform can help break down those walls and make your data accessible to those who need it.