Change data capture is a popular method to connect database tables to data streams, but it comes with drawbacks. The next evolution of the CDC pattern, first-class data products, provides resilient pipelines that support both real-time and batch processing while isolating upstream systems...
Learn how the latest innovations in Kora enable us to introduce new Confluent Cloud Freight clusters, which can save you up to 90% at GBps+ scale. Confluent Cloud Freight clusters are now available in Early Access.
Learn how to contribute to open source Apache Kafka by writing Kafka Improvement Proposals (KIPs) that solve problems and add features! Read on for real examples.
In this post, the second in the Kafka Producer and Consumer Internals Series, we follow our brave hero—a well-formed produce request—as it makes its way to the broker to be processed and have its data stored on the cluster.
We covered so much at Current 2024, from the 138 breakout sessions, lightning talks, and meetups on the expo floor to what happened on the main stage. If you heard any snippets or saw quotes from the Day 2 keynote, then you already know what I told the room: We are all data streaming engineers now.
We’re excited to announce Early Access for Confluent for VS Code. This Visual Studio Code integration streamlines workflows, accelerates development, and enhances real-time data processing, all in a unified environment. This post shows how to get started, and also lists opportunities to get involved.
The Q3 Cloud Bundle Launch comes to you from Current 2024, where data streaming industry experts have come together to show you why data streaming is critical today, especially in the age of AI, and how it will become even more important in shaping tomorrow’s businesses...
If you are a developer looking for an easier way to test your apps on topics with schemas, this is for you! Now you can easily create a message with a topic schema directly from the Confluent Cloud Console, with built-in validation and error checking.
The beauty of Kafka as a technology is that it can do a lot with little effort on your part. In effect, it’s a black box. But what if you need to see into the black box to debug something? This post shows what the producer does behind the scenes to help prepare your raw event data for the broker.
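To make those hidden steps a bit more concrete, here is a minimal Kotlin sketch of a producer whose configuration calls out each client-side stage the post covers: serialization, batching, and partitioning. The broker address, topic name, and settings are placeholders for illustration, not values from the post.

```kotlin
import org.apache.kafka.clients.producer.KafkaProducer
import org.apache.kafka.clients.producer.ProducerConfig
import org.apache.kafka.clients.producer.ProducerRecord
import org.apache.kafka.common.serialization.StringSerializer
import java.util.Properties

fun main() {
    val props = Properties().apply {
        put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092") // placeholder broker address
        // Serialization: keys and values are turned into bytes before they leave the client.
        put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer::class.java.name)
        put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer::class.java.name)
        // Batching: records headed for the same partition are grouped before being sent to the broker.
        put(ProducerConfig.LINGER_MS_CONFIG, 10)
        put(ProducerConfig.BATCH_SIZE_CONFIG, 32 * 1024)
    }

    KafkaProducer<String, String>(props).use { producer ->
        // Partitioning: with a non-null key, the default partitioner hashes the key to choose a partition.
        val record = ProducerRecord("orders", "order-123", """{"status":"created"}""")
        producer.send(record) { metadata, exception ->
            if (exception != null) println("Send failed: ${exception.message}")
            else println("Stored in partition ${metadata.partition()} at offset ${metadata.offset()}")
        }
    }
}
```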
With AI model inference in Flink SQL, Confluent allows you to simplify the development and deployment of RAG-enabled GenAI applications by providing a unified platform for both data processing and AI tasks. Learn how you can use it to build a RAG-enabled Q&A chatbot using real-time airline data.
62% of Confluent Cloud clusters run on AWS. Meanwhile, hundreds of thousands of customers are using DynamoDB. This blog post explains how the DynamoDB connector helps customers integrate the two platforms.
Since launching our first cloud connector in 2019, Confluent’s fully managed connectors have handled hundreds of petabytes of data and expanded to include over 80 fully managed connectors, custom connectors, and private networking. Discover popular connectors, SMTs, and use cases on Confluent Cloud...
Been searching far and wide for examples of Spring Boot with Kotlin integrated with Apache Kafka®? You’ve found it. But not just an example with unstructured data or no schema management. Not here! We’re going all the way with Stream Governance in Confluent Cloud. Let’s get into it.
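For a taste of what that looks like end to end, here is a hypothetical Kotlin sketch: a Spring Boot producer configuration wiring Confluent's Avro serializer to Schema Registry, plus a small service that publishes a schema-backed record. The endpoints, topic, and schema below are placeholders, not taken from the post, which walks through the full Stream Governance setup on Confluent Cloud.

```kotlin
import io.confluent.kafka.serializers.KafkaAvroSerializer
import org.apache.avro.Schema
import org.apache.avro.generic.GenericData
import org.apache.avro.generic.GenericRecord
import org.apache.kafka.clients.producer.ProducerConfig
import org.apache.kafka.common.serialization.StringSerializer
import org.springframework.context.annotation.Bean
import org.springframework.context.annotation.Configuration
import org.springframework.kafka.core.DefaultKafkaProducerFactory
import org.springframework.kafka.core.KafkaTemplate
import org.springframework.kafka.core.ProducerFactory
import org.springframework.stereotype.Service

@Configuration
class AvroKafkaConfig {
    @Bean
    fun producerFactory(): ProducerFactory<String, GenericRecord> =
        DefaultKafkaProducerFactory(
            mapOf(
                // Placeholder endpoints: substitute your own cluster and Schema Registry URLs (plus credentials).
                ProducerConfig.BOOTSTRAP_SERVERS_CONFIG to "localhost:9092",
                ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG to StringSerializer::class.java,
                // Confluent's Avro serializer registers/validates the value schema with Schema Registry on send.
                ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG to KafkaAvroSerializer::class.java,
                "schema.registry.url" to "http://localhost:8081"
            )
        )

    @Bean
    fun kafkaTemplate(factory: ProducerFactory<String, GenericRecord>): KafkaTemplate<String, GenericRecord> =
        KafkaTemplate(factory)
}

@Service
class OrderEventProducer(private val template: KafkaTemplate<String, GenericRecord>) {
    // Illustrative Avro schema; in practice this would be governed in Schema Registry.
    private val schema: Schema = Schema.Parser().parse(
        """{"type":"record","name":"OrderEvent","fields":[
             {"name":"orderId","type":"string"},
             {"name":"amount","type":"double"}]}"""
    )

    fun publish(orderId: String, amount: Double) {
        val event: GenericRecord = GenericData.Record(schema).apply {
            put("orderId", orderId)
            put("amount", amount)
        }
        // Topic name is illustrative only.
        template.send("orders", orderId, event)
    }
}
```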
Skai completely revamped its interactive, ad-campaign dashboard by adding Apache Kafka and an in-memory database—eventually moving the solution to Confluent Cloud. Once on Confluent Cloud, they devised an ingenious architecture for reducing the number of topics they needed.
We are excited to announce the release of a new Confluent Cloud Homepage UI, inspired by many conversations and feature requests from our customer and field teams. In the past, many users bypassed the Homepage as just another click in the way of what they were trying to accomplish. This redesign...
Learn how Confluent Cloud and BigQuery Continuous Queries work together to enable real-time data processing, including the benefits of the integration and a step-by-step guide to getting data from a BigQuery continuous query into Confluent Cloud to capture data...
The Apache Flink® community released Apache Flink 1.20 this week. In this blog post, we highlight some of the most interesting additions and improvements.