Change data capture (CDC) is a popular method for connecting database tables to data streams, but it comes with drawbacks. The next evolution of the CDC pattern, first-class data products, provides resilient pipelines that support both real-time and batch processing while isolating upstream systems...
Learn how the latest innovations in Kora enable us to introduce new Confluent Cloud Freight clusters, which can save you up to 90% at GBps+ scale. Confluent Cloud Freight clusters are now available in Early Access.
Learn how to contribute to open source Apache Kafka by writing Kafka Improvement Proposals (KIPs) that solve problems and add features! Read on for real examples.
This week in Las Vegas, Google is hosting its annual Google Cloud Next '24 conference, bringing together leaders from across the IT industry to learn about and explore the newest innovations in and around the Google Cloud ecosystem.
Let’s learn more about Bergur, the importance of customer engagement, and how Confluent fosters an environment for innovation and growth.
Confluent has been named a Leader in two IDC MarketScape reports: the IDC MarketScape for Worldwide Analytic Stream Processing Software 2024 Vendor Assessment and the IDC MarketScape for Worldwide Event Brokering Software 2024 Vendor Assessment.
Acquiring a single prescription medication is a complex end-to-end process that requires the seamless orchestration of data across drug manufacturers, distributors, providers, and pharmacies in the healthcare ecosystem.
Let’s learn how she got to Confluent and the initiative she has undertaken for advancing women in tech.
Learn how to track events in a large codebase (GitHub, in this example) using Apache Kafka and Kafka Streams.
For telecommunication companies (telcos) facing risks of equipment failures, software misconfiguration, network overload, and power outages, annual service outage costs can exceed billions of dollars. How can some of these costs be avoided?
The journey from data mess to data mesh is not an easy one—that’s why we’ve written a new ebook as a practical guide to help you navigate the challenges and learn how to successfully implement a data mesh using the Confluent Data Streaming Platform.
Let’s learn more about Amy, how she enables customer success, some of the cool customer use cases she gets to work on—and how Confluent helps her stay motivated.
Learn best practices for using Confluent Schema Registry, including understanding subjects and versions, pre-registering schemas, using data contracts, and more.
In a world increasingly driven by data, the revolutionary power of real-time data streaming applications cannot be denied. Five months ago, Confluent announced the Data Streaming Startup Challenge.
As presenters took the keynote stage at this year’s Kafka Summit in London, an undercurrent of excitement was “streaming” through the room. With over 3,500 people in attendance, both in person and online, the Apache Kafka® community came out...
Learn about our vision for Tableflow, a new feature in private early access that makes it push-button simple to take Apache Kafka® data and feed it directly into your data lake, warehouse, or analytics engine as Apache Iceberg® tables.
We're thrilled to announce the general availability of Confluent Cloud for Apache Flink across all three major clouds. This means that you can now experience Kafka and Flink as a unified, enterprise-grade platform to connect and process your data in real time.