
Data Streaming Platforms, Gen AI, and Apache Flink® Reigned Supreme at Kafka Summit London

Written by: Shaun Clowes and Erin Junio

As presenters took the keynote stage at this year’s Kafka Summit in London, an undercurrent of excitement was “streaming” through the room. With over 3,500 people in attendance, both in person and online, the Apache Kafka® community came out in force to celebrate Kafka milestones, connect with industry experts and peers, and learn about the latest news and developments in the world of data streaming.

As Confluent’s Chief Product Officer, I was thrilled to open the Summit by thanking some of the leaders in our Kafka community for their remarkable contributions. Let’s not forget that, just recently, we passed the 1000th Kafka Improvement Proposal (KIP) for the Apache Kafka project — what an incredible journey! And it’s not slowing down. As Confluent CEO and Co-founder Jay Kreps said during his presentation, “Data streams are the abstraction that unify the operational estate and Kafka is the open standard for data streaming...It’s now the case that some of the largest scale, most efficiency-oriented systems in the world are streaming systems.”

Keep reading below for a few more highlights from the action-packed keynote:

  • Confluent Cloud for Apache Flink is now GA: We announced general availability of our fully managed Confluent Cloud for Apache Flink® service, providing customers with a true multicloud solution to seamlessly deploy stream processing workloads everywhere their data and apps reside.

  • Unveiling Tableflow: We introduced our vision for Tableflow, a new feature on the Confluent Cloud data streaming platform that allows users to convert Apache Kafka topics and associated schemas to Apache Iceberg® tables with a single click to better supply data lakes and data warehouses. Tableflow is in Early Access release now.

  • Kora is faster than ever: We revealed that Kora, the engine that powers Confluent Cloud’s cloud-native Kafka service, is now 16x faster than OSS Kafka.

  • Connector portfolio enhanced: New enhancements to Confluent’s fully managed connector portfolio were unveiled, including DNS Forwarding and private Egress Access Points, and an improved 99.99% SLA. 

  • Stream Governance beefed up: Confluent’s Stream Governance offering continues to evolve, and is now enabled by default across all environments with a 99.99% uptime SLA for Schema Registry. Additionally, we announced that regional coverage for Stream Governance will expand to all Confluent Cloud regions, providing more seamless access to key governance features.

  • Twinlabs.ai wins big: The winners of Confluent’s $1M Data Streaming Startup Challenge were announced, and Twinlabs.ai took home the big prize for their AI-powered digital twin platform that creates 3D mirrors of physical objects, systems, or venues (like conference centers), all in real time.

You can watch the full keynote presentation here.

Two Full Days of Non-Stop Learning

There was something for everyone at this year’s Kafka Summit, ranging from short lightning talks designed to whet your appetite for topics like developing efficient Flink apps or exploring the intricacies of large language models, to longer sessions that covered Apache Flink® fundamentals or challenged participants to build a data streaming pipeline in real time. 

Take a peek at some of the presentation highlights below, and stay tuned for our “Best of Kafka Summit” session roundup coming to the blog soon.

Kafka Summit Day One Session Highlights

Code-First Approach: Crafting Efficient Flink Apps

DeVaris Brown, CEO at Meroxa, kept things moving on Day One with a 10-minute lightning talk on how to build efficient apps with Flink. While a more conventional, SQL-based approach to working with Flink can limit its potential, Brown proposed that adopting a code-first approach gives developers greater control over application logic, helps facilitate complex transformations, and enables more efficient handling of state and time — all while improving the overall quality of Flink applications.
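To make the "code-first" idea concrete, here's a minimal, framework-agnostic sketch of the kind of keyed, stateful logic a code-first Flink application expresses directly (for example, inside a `KeyedProcessFunction`). The class and metric names are illustrative assumptions, not from the talk; in a real DataStream app, Flink's managed state would replace the dictionary and checkpoint it for you.

```python
class RunningAverage:
    """Per-key running average — the sort of keyed state a code-first
    Flink app manages explicitly (and Flink would checkpoint for us)."""

    def __init__(self):
        # key -> (count, total); a real Flink job would use ValueState here
        self._state = {}

    def update(self, key, value):
        count, total = self._state.get(key, (0, 0.0))
        count, total = count + 1, total + value
        self._state[key] = (count, total)
        return total / count  # emit the updated average downstream
```

Expressing the transformation as plain code like this (rather than SQL) is what gives the developer direct control over how state is read, updated, and emitted per event.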

Eager for more Flink? Check out this teaser for the next issue of our comic book The Data Streaming Revolution. The adventure continues as our heroes are tasked with figuring out a way to do real-time stream processing that’s efficient, manageable, and cost effective. 

The Game is On! Customer360 Pipeline With Confluent – An AWS GameDay

We love a mini-game! At this fun, hands-on workshop hosted by Confluent and AWS, attendees embarked on a quest where they were asked to build their very own event-driven, data streaming pipeline to identify upsell opportunities in real time, mitigate churn, categorize customer loyalty, and notify customers of delivery status. Participants gathered with their laptops and spent a fun couple of hours puzzling through clues to earn points (and prizes!) while building their pipelines.

The Definitive Guide to Flink’s Checkpointing

This beginner-friendly session on Flink’s checkpointing feature was a fan favorite on Day One. David Moravek, Staff Software Engineer at Confluent and Co-founder of Immerok, dug into the basics of how checkpointing works in Flink and how you can use it to make your data streaming applications more reliable. Attendees left with a wealth of ready-to-apply knowledge on how to set up checkpoints, keep systems running smoothly, and optimize performance.
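As a taste of what setting up checkpoints looks like, here is a sketch of the core checkpointing settings in Flink configuration. The specific values and the storage path are illustrative assumptions, not recommendations from the session:

```yaml
# flink-conf.yaml — checkpointing essentials (values are illustrative)
execution.checkpointing.interval: 30s        # how often to snapshot state
execution.checkpointing.mode: EXACTLY_ONCE   # or AT_LEAST_ONCE for lower latency
execution.checkpointing.timeout: 10min       # abort checkpoints that take too long
execution.checkpointing.min-pause: 5s        # breathing room between checkpoints
state.backend: rocksdb                       # keeps large state off-heap
state.checkpoints.dir: s3://my-bucket/checkpoints  # durable storage (hypothetical path)
```

Tuning the interval and min-pause is where the reliability-versus-performance trade-off the session covered really plays out.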

Kafka Summit Day Two Session Highlights

How Do You Query a Stream?

So you’ve fully embraced Apache Kafka as the core of your data infrastructure and now benefit from event-driven services that respond to the world in (more or less) real time. It’s safe to say you’ve moved on from your monolithic past and things are looking good. But there’s just one problem: now that everything’s a stream, how do you query things? Tim Berglund, VP of Developer Relations at StarTree, explored this topic in an incredibly popular session where he outlined the many solutions available for querying and how to make the right choice to suit your needs. “I will not present a correct answer today, because there isn’t one,” Berglund said. “This is engineering. There are no solutions; there are only trade-offs,” he stated, before walking the audience through a variety of available options.

Explaining How Real-Time GenAI Works in a Noisy Pub

Who doesn’t want to learn about the inner workings of large language models (LLMs) over a nice, cold pint? In the time it takes to enjoy a round of drinks, Simon Aubury, Associate Director of Data Platforms at Simple Machines, covered the history of LLMs (what they are, how they’re built, and how they’re trained on enormous volumes of data). “We’re seeing this merging of generative AI solutions, which are maturing very quickly, with a wealth of up-to-date, rich, useful information. And I think we’re going to see abstractions come together and see some great results. This is something I think we can all raise a drink to,” Aubury said. Now that’s refreshing!

Ensuring Kafka Service Resilience: A Dive Into Health-Checking Techniques

Emma Humber, Staff Software Engineer at Confluent, launched into a quick talk on Day Two about health-checking, a powerful technique that pairs an application with monitoring to validate the availability of a Kafka environment. In a mere 10 minutes, she covered Kafka health-check methods and best practices, how to identify common pitfalls, and how to monitor critical performance metrics for early issue detection. By the end, we were all feeling more resilient!
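A health check typically boils down to gathering a few key metrics and classifying the result. Below is a minimal sketch of that evaluation step; the metric names and thresholds are assumptions for illustration, not from the talk. In practice the inputs would come from an AdminClient describe call or the brokers’ JMX metrics.

```python
def evaluate_health(metrics, max_lag=10_000, max_p99_ms=500):
    """Classify a Kafka environment as 'healthy', 'degraded', or
    'unhealthy' from a dict of collected metrics (names are illustrative)."""
    if metrics.get("offline_partitions", 0) > 0:
        return "unhealthy"   # data is unavailable — page someone
    if metrics.get("under_replicated_partitions", 0) > 0:
        return "degraded"    # durability at risk, but still serving
    if metrics.get("max_consumer_lag", 0) > max_lag:
        return "degraded"    # consumers falling behind
    if metrics.get("produce_p99_ms", 0) > max_p99_ms:
        return "degraded"    # latency creeping up — early warning
    return "healthy"
```

Separating collection from classification like this makes the thresholds easy to tune and the check easy to unit test — one of the pitfalls the talk warned against is a health check too entangled with its environment to verify.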

It’s Over Already! But Stay Tuned for More at Kafka Summit Bangalore and Current 2024

We hate to say goodbye, so let’s just go with, “Until next time!” because our next data streaming events are right around the corner. We hope to see you at the inaugural Kafka Summit Bangalore in May and Current 2024 in September, where you can look forward to more action-packed days of sessions presented by data streaming experts on every topic imaginable.

For now, you can…

  • Watch the keynote on demand here.

  • Watch recordings of the Kafka Summit London breakout sessions the week of March 25 here.

  • Shaun Clowes is the Chief Product Officer at Confluent.

  • Erin Junio is a Senior Content Marketing Manager at Confluent. She specializes in telling customer stories and distilling technical subject matter into compelling, relatable narratives.
