Change data capture is a popular method to connect database tables to data streams, but it comes with drawbacks. The next evolution of the CDC pattern, first-class data products, provides resilient pipelines that support both real-time and batch processing while isolating upstream systems...
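To make the table-to-stream connection concrete, here is a minimal consuming-side sketch, assuming a Debezium-style connector that publishes change events to one Kafka topic per source table; the topic name, group id, and envelope fields are illustrative assumptions, not details from the article.

    import java.time.Duration;
    import java.util.List;
    import java.util.Properties;

    import org.apache.kafka.clients.consumer.ConsumerRecord;
    import org.apache.kafka.clients.consumer.ConsumerRecords;
    import org.apache.kafka.clients.consumer.KafkaConsumer;

    public class CdcConsumerSketch {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put("bootstrap.servers", "localhost:9092");
            props.put("group.id", "cdc-demo");
            props.put("key.deserializer",
                    "org.apache.kafka.common.serialization.StringDeserializer");
            props.put("value.deserializer",
                    "org.apache.kafka.common.serialization.StringDeserializer");

            try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
                // Debezium convention: one topic per captured table (hypothetical name).
                consumer.subscribe(List.of("dbserver1.inventory.customers"));
                while (true) {
                    ConsumerRecords<String, String> records =
                            consumer.poll(Duration.ofMillis(500));
                    for (ConsumerRecord<String, String> record : records) {
                        // Each value is a JSON change event whose "op" field marks
                        // create/update/delete, with "before" and "after" row images.
                        System.out.printf("key=%s value=%s%n", record.key(), record.value());
                    }
                }
            }
        }
    }

A first-class data product would wrap a topic like this with a schema, contract, and ownership metadata rather than exposing raw table changes directly, which is how it isolates consumers from the upstream database.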
Learn how the latest innovations in Kora enable us to introduce new Confluent Cloud Freight clusters, which can save you up to 90% at GBps+ scale. Confluent Cloud Freight clusters are now available in Early Access.
Learn how to contribute to open source Apache Kafka by writing Kafka Improvement Proposals (KIPs) that solve problems and add features! Read on for real examples.
Our new PII Detection solution lets you securely use your unstructured text by providing entity-level control. Combined with our suite of data governance tools, it enables a powerful real-time cyber defense strategy.
Check out how data streaming works and how it helps businesses act on, rather than react to, what's happening in real time. Get the data streaming basics and explore how businesses are getting started.
"It really is an awesome time for this community,” said Jay Kreps, CEO of Confluent, in his opening keynote to over 1,500 live attendees (and 2k+ virtual viewers) from 50+ countries at Kafka Summit London.
Announcing the latest updates to Confluent’s cloud-native data streaming platform: Kora Engine, Data Quality Rules, Custom Connectors, Stream Sharing, and more.
This year, we crossed an important threshold: data streaming is now considered a business requirement for organizations across many industries. Findings from the 2023 Data Streaming Report show that 72% of the 2,250 IT leaders surveyed are using data streaming to power mission-critical systems.
Take a tour of the internals of Confluent’s Apache Kafka® service, powered by Kora: the next-generation, cloud-native streaming engine.
Our business at Loggi has grown a lot over the past few years, and with that expansion came the realization that our systems had to be more distributed. We pushed our architecture to a new level so we could keep up with the company's growth by building new event-driven systems and real-time data pipelines.
Companies are looking to optimize cloud and tech spend, and being incredibly thoughtful about which priorities get assigned precious engineering and operations resources. “Build vs. Buy” is being taken seriously again. And if we’re honest, this probably makes sense. There is a lot to optimize.
Why do our customers choose Confluent as their trusted data streaming platform? In this blog, we will explore our platform’s reliability, durability, scalability, and security by presenting some remarkable statistics and providing insights into our engineering capabilities.
Operating Kafka at scale can consume your cloud spend and engineering time, and everyday tasks like scaling or deploying new clusters can be complex and require dedicated engineers. This post focuses on how Confluent Cloud is 1) Resource Efficient, 2) Fully Managed, and 3) Complete.
Data streaming capabilities are transforming everything, from allowing you to see when your ride will arrive to powering curbside pickups of groceries. The immediacy and personalization of those commercial experiences are fast becoming the expectations when using public services and healthcare, too.
Companies in nearly every industry are using Apache Kafka to harness their streaming data and deliver rich customer experiences and real-time business insights. In fact, Kafka has become so widely accepted as the de facto technology for data streaming that it’s now used by over 70% of the Fortune 500.
The blog introduces Confluent Platform 7.4 and its key features, including enhancing scalability, increasing architectural simplicity, accelerating time to market, reducing ops burden, and ensuring high-quality data streams. It also covers what's new in Apache Kafka 3.4.
In part 2 of our blog series on understanding and optimizing your Kafka costs, we dive into how to estimate costs stemming from the development and operations personnel needed to self-manage Kafka.
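For a sense of what such an estimate looks like, here is a back-of-envelope sketch; every figure below is a hypothetical placeholder rather than a number from the post.

    public class KafkaOpsCostSketch {
        public static void main(String[] args) {
            // Hypothetical inputs; substitute your own team's numbers.
            double engineers = 3.0;                // headcount involved in Kafka operations
            double timeFraction = 0.5;             // share of their time spent on Kafka
            double loadedAnnualCostUsd = 200_000;  // fully loaded cost per engineer

            // Personnel cost = headcount x time share x fully loaded cost per engineer.
            double annualPersonnelCostUsd = engineers * timeFraction * loadedAnnualCostUsd;
            System.out.printf("Estimated annual Kafka ops personnel cost: $%,.0f%n",
                    annualPersonnelCostUsd);
        }
    }

With these placeholder inputs the estimate comes to $300,000 per year before any infrastructure spend, which is the category of cost the post walks through in detail.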