Change data capture is a popular method to connect database tables to data streams, but it comes with drawbacks. The next evolution of the CDC pattern, first-class data products, provides resilient pipelines that support both real-time and batch processing while isolating upstream systems...
Learn how the latest innovations in Kora enable us to introduce new Confluent Cloud Freight clusters, which can save you up to 90% at GBps+ scale. Confluent Cloud Freight clusters are now available in Early Access.
Learn how to contribute to open source Apache Kafka by writing Kafka Improvement Proposals (KIPs) that solve problems and add features! Read on for real examples.
At Current 2023, we announced that Confluent Cloud is now up to 10x faster than Apache Kafka®, thanks to Kora, the cloud-native Kafka engine that powers Confluent Cloud. In this blog post, we will cover what that means in more depth.
Our latest updates to Confluent Cloud focus on giving customers a seamless experience with our data streaming platform. With these improvements, we aim to provide a more streamlined and secure experience, allowing users to focus on leveraging real-time data to drive business outcomes.
Imagine easily enriching data streams and building stream processing applications in the cloud, without worrying about capacity planning, infrastructure and runtime upgrades, or performance monitoring. That's where our serverless Apache Flink® service comes in.
As companies increase their use of real-time data, we have seen the proliferation of Kafka clusters within many enterprises. Often, siloed application and infrastructure teams set up and manage new clusters to solve new use cases as they arise. In many large, complex enterprises, this organic growth...
Today, we’re excited to announce the general availability of Data Portal on Confluent Cloud. Data Portal is built on top of Stream Governance, the industry’s only fully managed data governance suite for Apache Kafka® and data streaming.
Learn how to interact with the librdkafka library when sending messages and how to handle errors correctly. Take a deep dive into the internal mechanics of the library.
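The post itself covers librdkafka, the C client; purely as an illustrative sketch of the same produce-and-check-delivery pattern, here is a rough Java-client analogue (the broker address and the "orders" topic are placeholders, not anything from the post). The send is asynchronous, and the callback plays the role of librdkafka's delivery report.

```java
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class ProduceWithErrorHandling {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder broker
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class);

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // send() is asynchronous; the callback is the per-record delivery outcome,
            // analogous to a delivery report callback in librdkafka.
            producer.send(new ProducerRecord<>("orders", "key-1", "value-1"), (metadata, exception) -> {
                if (exception != null) {
                    // Delivery failed after retries were exhausted or a fatal error occurred.
                    System.err.println("Delivery failed: " + exception.getMessage());
                } else {
                    System.out.printf("Delivered to %s-%d@%d%n",
                            metadata.topic(), metadata.partition(), metadata.offset());
                }
            });
            // Block until every queued record has a delivery outcome before exiting.
            producer.flush();
        }
    }
}
```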
Stepping into the world of Apache Kafka® can feel a bit daunting at first. Get started with the top resources for beginners to start building your first Kafka application!
In this blog post, we will provide an overview of Confluent's new SQL Workspaces for Flink, which offer the same functionality for streaming data that SQL users have come to expect for batch-based systems.
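SQL Workspaces themselves live in the Confluent Cloud UI; as a hedged sketch of the underlying idea (familiar SQL evaluated continuously over an unbounded stream), here is a small open-source Flink Table API program. The `clicks` table, its columns, and the datagen connector are illustrative assumptions, not the workspace feature itself.

```java
import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.TableEnvironment;

public class StreamingSqlSketch {
    public static void main(String[] args) {
        // A streaming TableEnvironment: the same SQL you would run against a batch
        // table is continuously evaluated as new rows arrive.
        TableEnvironment tEnv =
                TableEnvironment.create(EnvironmentSettings.inStreamingMode());

        // Hypothetical source table of click events; datagen stands in for a real connector.
        tEnv.executeSql(
                "CREATE TABLE clicks (" +
                "  user_id STRING," +
                "  url STRING," +
                "  ts TIMESTAMP(3)," +
                "  WATERMARK FOR ts AS ts - INTERVAL '5' SECOND" +
                ") WITH (" +
                "  'connector' = 'datagen'" +
                ")");

        // A familiar aggregate query, except the result updates continuously.
        tEnv.executeSql(
                "SELECT user_id, COUNT(*) AS clicks_per_user " +
                "FROM clicks GROUP BY user_id")
            .print();
    }
}
```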
The Apache Flink PMC is pleased to announce the release of Apache Flink 1.18.0. As usual, we are looking at a packed release with a wide variety of improvements and new features. Overall, 174 people contributed to this release, completing 18 FLIPs and 700+ issues.
This series of blog posts will take you on a journey from absolute beginner (where I was a few months ago) to building a fully functioning, scalable application. Our example Gen AI application will use the Kappa Architecture as the architectural foundation.
Confluent Schema Registry, both in Confluent Platform Enterprise and Confluent Cloud, now supports the use of data contracts. In this article, we'll walk through an example of enhancing a schema to be a full-fledged data contract.
Apache Kafka 3.6 is here! This release includes Tiered Storage (Early Access), the ability to migrate clusters from ZooKeeper to KRaft with no downtime, the addition of a grace period to stream-table joins, & more!
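As a rough sketch of the new stream-table join grace period (assuming the `Joined.withGracePeriod` API from KIP-923 and a versioned state store on the table side; topic names, serdes, and retention values here are made up for illustration):

```java
import java.time.Duration;
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.kstream.Consumed;
import org.apache.kafka.streams.kstream.Joined;
import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.kstream.KTable;
import org.apache.kafka.streams.kstream.Materialized;
import org.apache.kafka.streams.kstream.Produced;
import org.apache.kafka.streams.state.Stores;

public class GracePeriodJoinSketch {
    public static void main(String[] args) {
        StreamsBuilder builder = new StreamsBuilder();

        // The table side is materialized as a versioned store so out-of-order stream
        // records can be joined against the table state as of their own timestamp.
        KTable<String, String> rates = builder.table(
                "fx-rates",
                Consumed.with(Serdes.String(), Serdes.String()),
                Materialized.<String, String>as(
                        Stores.persistentVersionedKeyValueStore("rates-store", Duration.ofHours(1))));

        KStream<String, String> orders = builder.stream(
                "orders", Consumed.with(Serdes.String(), Serdes.String()));

        // The grace period buffers stream records, giving late table updates a chance
        // to arrive before the join is evaluated.
        orders.join(
                rates,
                (order, rate) -> order + " @ " + rate,
                Joined.<String, String, String>with(Serdes.String(), Serdes.String(), Serdes.String())
                        .withGracePeriod(Duration.ofMinutes(5)))
              .to("priced-orders", Produced.with(Serdes.String(), Serdes.String()));

        // builder.build() would then be handed to a KafkaStreams instance as usual.
    }
}
```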
Ever dealt with a misbehaving consumer group? Imbalanced broker load? This could be due to your consumer group and partitioning strategy!
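For context on the partitioning side, the assignor a consumer group uses is just a config. A minimal sketch, with placeholder broker, group, and topic names:

```java
import java.time.Duration;
import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.CooperativeStickyAssignor;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

public class AssignmentStrategySketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder broker
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "orders-consumers");
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
        // The assignor decides how the topic's partitions are spread across group members.
        // CooperativeStickyAssignor keeps assignments stable across rebalances and avoids
        // stop-the-world revocations, which helps with imbalanced or frequently rebalancing groups.
        props.put(ConsumerConfig.PARTITION_ASSIGNMENT_STRATEGY_CONFIG,
                  CooperativeStickyAssignor.class.getName());

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(List.of("orders"));
            ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(1));
            for (ConsumerRecord<String, String> record : records) {
                System.out.printf("partition=%d offset=%d key=%s%n",
                        record.partition(), record.offset(), record.key());
            }
        }
    }
}
```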
Today, we’re introducing Confluent Cloud’s fully managed service for Apache Flink®, improvements to Kora Engine, how AI and streaming work together, and much more.