Change data capture is a popular method to connect database tables to data streams, but it comes with drawbacks. The next evolution of the CDC pattern, first-class data products, provides resilient pipelines that support both real-time and batch processing while isolating upstream systems...
Learn how the latest innovations in Kora enable us to introduce new Confluent Cloud Freight clusters, which can save you up to 90% at GBps+ scale. Confluent Cloud Freight clusters are now available in Early Access.
Learn how to contribute to open source Apache Kafka by writing Kafka Improvement Proposals (KIPs) that solve problems and add features! Read on for real examples.
Kafka Summit New York City was the largest gathering of the Apache Kafka community outside Silicon Valley in history. Over 500 “Kafkateers” from 400+ companies and 20 countries converged in […]
Apache Kafka® is the best enterprise streaming platform that runs straight off the shelf. Just point your client applications at your Kafka cluster and Kafka takes care of the rest: […]
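As a quick, hedged illustration of what "pointing a client application at your Kafka cluster" can look like, here is a minimal Java producer sketch; the broker address and the "events" topic name are assumptions for the example, not part of the original post.

```java
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class QuickstartProducer {
    public static void main(String[] args) {
        Properties props = new Properties();
        // Assumed cluster address; replace with your own broker list.
        props.put("bootstrap.servers", "localhost:9092");
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // Send a single record to a hypothetical "events" topic;
            // the brokers handle partitioning, replication, and retention.
            producer.send(new ProducerRecord<>("events", "key", "hello, kafka"));
        }
    }
}
```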
Today, I’m really excited to announce Confluent Cloud™, Apache Kafka® as a Service: the simplest, fastest, most robust, and most cost-effective way to run Apache Kafka in the public cloud. […]
The team at Confluent, along with the Apache Kafka™ community, is excited the day is finally here – it’s time for Kafka Summit NYC! We’ll be at the Midtown Hilton […]
Ever wondered what it’s like to run Kafka in production? What about building and deploying microservices that process streams of data in real-time, at large scale? Or, maybe just the […]
In Q1, Confluent conducted a survey of the Apache Kafka® community and those using streaming platforms to learn about their application of streaming data. This is our second execution of […]
The Google Dataflow team has done a fantastic job in evangelizing their model of handling time for stream processing. Their key observation is that in most cases you can’t globally […]
In our previous post on the Streaming Pipelines track, we highlighted some of the sessions not to be missed at Kafka Summit NYC. As a follow on to that, let’s […]
Here at Confluent, our goal is to ensure every company is successful with their streaming platform deployments. Oftentimes, we’re asked to come in and provide guidance and tips as developers […]
Pandora began adoption of Apache Kafka® in 2016 to orient its infrastructure around real-time stream processing analytics. As a data-driven company, we have several-thousand-node Hadoop clusters with hundreds of Hive tables critical to Pandora’s operational and reporting success...
Kafka Summit NYC is just a few weeks away! Since we’re on the program committee for this event and are also the track leads for the Streaming Pipelines track, […]
Note: The blog post Ensure Data Quality and Data Evolvability with a Secured Schema Registry contains more recent information. If you use Apache Kafka to integrate and decouple different data […]
It’s Strata+Hadoop World this week, and the who’s who of the data management world will gather at the San Jose Convention Center to talk all things big data. I’m kind […]
Big news this month! First and foremost, Confluent Platform 3.2.0 with Apache Kafka® 0.10.2.0 was released! Read about the new features, check out all 200 bug fixes and performance improvements […]