The data flowing into Kafka is fresh, and it can fuel applications with a new level of operational visibility and intelligence. But streaming data stops being real-time the moment the sink is a batch system. The challenge is to process and analyze it at scale and extract those insights before they go stale.
So what’s the right architecture? Should you ingest streams into a data warehouse or a data lake? Use a stream processor or a database? Engineering teams love Apache Flink, but they also love Apache Druid, a popular real-time analytics database used by thousands of companies including Confluent and Netflix. Do you need both Flink and Druid? When does combining them make sense, and when does it not?
Join this session to learn about Apache Druid and why companies use it in combination with Kafka and Flink for real-time applications. You’ll see how Druid complements Flink and Kafka, and what makes it purpose-built for analyzing streams and events. The talk walks through real-world examples from companies running Apache Druid with Kafka and Flink in production today, along with best practices every developer can take advantage of.
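To make the Kafka–Flink–Druid pattern concrete, here is a minimal sketch of a Flink job that reads raw events from one Kafka topic and writes enriched events to a second topic, which Druid can then consume through its native Kafka ingestion. The broker address, topic names, and the trivial "enrichment" step are illustrative assumptions, not part of the talk itself.

```java
import org.apache.flink.api.common.eventtime.WatermarkStrategy;
import org.apache.flink.api.common.serialization.SimpleStringSchema;
import org.apache.flink.connector.kafka.sink.KafkaRecordSerializationSchema;
import org.apache.flink.connector.kafka.sink.KafkaSink;
import org.apache.flink.connector.kafka.source.KafkaSource;
import org.apache.flink.connector.kafka.source.enumerator.initializer.OffsetsInitializer;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class EnrichEventsJob {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        // Read raw events from Kafka (broker and topic names are placeholders).
        KafkaSource<String> source = KafkaSource.<String>builder()
                .setBootstrapServers("localhost:9092")
                .setTopics("events")
                .setGroupId("flink-enricher")
                .setStartingOffsets(OffsetsInitializer.latest())
                .setValueOnlyDeserializer(new SimpleStringSchema())
                .build();

        // Write enriched events to a second topic that Druid ingests
        // with its built-in Kafka indexing.
        KafkaSink<String> sink = KafkaSink.<String>builder()
                .setBootstrapServers("localhost:9092")
                .setRecordSerializer(KafkaRecordSerializationSchema.builder()
                        .setTopic("enriched-events")
                        .setValueSerializationSchema(new SimpleStringSchema())
                        .build())
                .build();

        DataStream<String> raw =
                env.fromSource(source, WatermarkStrategy.noWatermarks(), "kafka-events");

        // Placeholder enrichment: tag each record. A real job would join,
        // aggregate, or transform the stream here before handing it to Druid.
        raw.map(value -> value + " enriched")
           .sinkTo(sink);

        env.execute("kafka-flink-druid-enrichment");
    }
}
```

In this division of labor, Flink handles the in-flight transformation and Druid handles low-latency analytical queries over the resulting event stream.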