This year, we were pleased to host the inaugural Kafka Summit, the first global summit for the Apache Kafka community. Kafka Summit 2016 gave Kafka users a forum to share their experiences with the technology, and gave the community a chance to learn from the creators of several other stream processing systems that integrate with Kafka. To see some of the top sessions, check out this curated list.
Now, we have some big news to share.
For next year, I’m very excited to announce two Kafka Summits – in New York City and San Francisco – to further support the growing Apache Kafka community and share the latest and greatest in stream processing. The conferences will bring together stream processing experts, open source innovators and Kafka enthusiasts to discuss the future of streaming data as well as real-world examples of companies that are successfully getting business value from their streaming data pipelines. Many of my fellow Apache Kafka committers will join me in attendance to answer questions and discuss the road ahead for Kafka.
Here are the dates and locations – mark your calendars!
Kafka Summit New York City
Location: Hilton Midtown Manhattan
Tutorial & Hackathon: Sunday, May 7, 2017
Conference: Monday, May 8, 2017
Training: Tuesday–Thursday, May 9–11, 2017
Kafka Summit San Francisco
Location: Hilton San Francisco Union Square
Tutorial & Hackathon: Sunday, August 27, 2017
Conference: Monday, August 28, 2017
Training: Tuesday–Thursday, August 29–31, 2017
The call for papers is open and sponsorship information is available for both events. Get involved!
We look forward to seeing you in SF or NYC next year.