
Join the Excitement at Current 2023: Unmissable Keynotes and 5 Must-Attend Sessions

Written by Andrew Sellers

Today, the use of data streaming technologies has become table stakes for businesses.

But with data streaming technologies, patterns, and best practices continuing to mature, it’s imperative for businesses to stay on top of what’s new and next in the world of data streaming.

This requires a platform that facilitates networking with the broader data streaming community—the companies implementing data streaming, technology vendors, open source contributors, researchers, and many others who devote their time and energy to this development paradigm. And this is exactly where a data streaming industry event like Current 2023: The Next Generation of Kafka Summit can provide value.

Current 2023 will offer two full days of content (read: 100+ sessions) and networking opportunities—all while providing innumerable opportunities for deep dives into architectures, technologies, use cases, and trends. 

And with Current 2023 less than a month away, I am excited about meeting with data streaming enthusiasts, learning about the challenges and latest advancements in the space (both in terms of technology and products), and drawing inspiration from the myriad cool use cases that businesses are currently driving with data streaming.

But conferences can be overwhelming. The key to getting the most out of a conference is picking your sessions carefully. As you set aside time to review the conference agenda and determine which speakers really spark your interest, here’s a breakdown of what to expect from the Current 2023 keynotes, along with my top five session picks that you absolutely shouldn’t miss.

What to expect from Current 2023 keynotes

To set the tone of the event, we will kick off Day 1 with a keynote focused on the evolution and impact of data streaming platforms. You’ll hear from data streaming pros—including our very own Jay Kreps, co-founder and CEO of Confluent; Shaun Clowes, chief product officer at Confluent; and Joe Foster, cloud computing program manager at NASA’s Goddard Space Flight Center.

We’ll also showcase great customer stories, highlight some exciting demos—including how data streaming can help you get the most out of your generative AI applications—and bring you up to speed on the latest developments and product launches within Confluent.

The Day 2 keynote will put the broader data streaming community in the limelight: developers outside of Confluent will showcase exciting use cases they’re currently implementing, highlight all things open source Kafka and Flink, and much more.

Why miss out? Register today with discount code PRMBLG23 to get 25% off registration prices.

Top 5 Sessions to Check Out

1. Off-Label Data Mesh: A Prescription for Healthier Data 

by Confluent Staff Technologist Adam Bellemare

Tues., Sept. 26, 4-4:45 pm PDT

Data mesh, a popular socio-technical approach for designing modern data architectures, allows for sharing, accessing, and managing analytical data in complex and large-scale environments within or across organizations. But did you know that data mesh principles can also be applied to operational data in many instances? It turns out we see a lot of customers using data products, federated governance, and domain-driven design to improve their operational data plane as well.

However, one thing to keep in mind is that there’s no technology silver bullet when it comes to getting data mesh right. It’s more about instituting organizational and process changes that can help companies effectively implement data mesh.  

In this session, Adam will highlight:

  • The key social and technical hurdles on the path to implementing your own data mesh

  • The resounding successes he has seen from applying off-label data mesh to both analytical and operational domains

  • The common areas of success seen across numerous clients and customers 

  • A set of practical guidelines for implementing your own minimally viable data mesh

Why attend?

If you are already implementing data mesh within your organization, this session will help you understand how data mesh principles can be equally applicable to your operational data. And if you’re new to data mesh, you will walk away with an understanding of how it can help your organization along with practical examples of how implementing data mesh can help free technology practitioners from managing infrastructure—so they can focus on using their data to solve business problems.

2. Need for Speed: Machine Learning in the Era of Real-Time

by Oli Makhasoeva, director of Developer Relations and Operations at Bytewax

Tues., Sept. 26, 1:30-2:15 pm PDT

The growing demand for quick decision-making is driving demand for real-time machine learning (RTML).

Today, users want their data now (read: low latency), want their data to be fresh (read: dynamic data adaptability), and want it to be cheap (read: efficient resource utilization). This means ML techniques are constantly being developed and refined to meet these user demands.

Oli will share:

  • The evolution of solutions to these challenges

  • How ML systems are advancing toward online inference and continual learning

  • The latest RTML techniques and key considerations when designing and implementing real-time machine-learning solutions

Why attend?

This session will describe generalizable patterns for enabling ML with real-time data. Artificial intelligence usage is growing fast, as it offers competitive advantages for nearly any business. These tools work best when contextualized with the most relevant data, often delivered in motion via data streaming. Come hear some insights that may help your business in its AI journey.

3. Flink SQL: The Challenges to Build a Streaming SQL Engine 

by Alibaba’s Jingsong Li 

Tues., Sept. 26, 5-5:45 pm PDT

Flink SQL is a powerful tool for stream processing that allows users to write SQL queries over streaming data. But building a streaming SQL engine is easier said than done. Jingsong’s session will explore how Flink SQL resolves common challenges encountered when building a modern streaming SQL engine, such as:

  • Handling late-arriving data while guaranteeing result correctness

  • Ingesting change data from databases in real time and applying complex operations to the change events

  • Processing infinite datasets with limited storage without sacrificing result correctness

Why attend?

This session is a good way to get introduced to Flink SQL, a powerful abstraction for running queries on streaming (and batch) datasets. It will showcase real-world examples of using Flink SQL to solve common stream processing problems.

If you’re already using Flink, this session will serve as a refresher about the powerful capabilities Flink SQL can bring to your business with an astonishingly fast time to market.

4. Deeply Declarative Data Pipelines 

by LinkedIn’s Senior Staff Software Engineer Ryanne Dolan

Weds., Sept. 27, 10:30-11:15 am PDT

With Flink and Kubernetes, it's possible to deploy stream processing jobs with just SQL and YAML. While this low-code approach can certainly save a lot of development time, there’s more to data pipelines than just streaming SQL. 

In this session, Ryanne will:

  • Explore just how "declarative" we can make streaming data pipelines on Kubernetes

  • Demonstrate how we can go deeper by adding more and more operators to the stack
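
Ryanne’s “SQL and YAML” pairing can be pictured with something like the Flink Kubernetes Operator’s FlinkDeployment resource. The manifest below is a simplified sketch, not taken from the talk; the name, image tag, resource figures, and jar/SQL paths are illustrative placeholders:

```yaml
apiVersion: flink.apache.org/v1beta1
kind: FlinkDeployment
metadata:
  name: orders-enrichment        # hypothetical pipeline name
spec:
  image: flink:1.17              # illustrative image tag
  flinkVersion: v1_17
  jobManager:
    resource:
      memory: "2048m"
      cpu: 1
  taskManager:
    resource:
      memory: "2048m"
      cpu: 1
  job:
    # A SQL-runner jar that executes the pipeline's streaming SQL;
    # the jar and file paths are placeholders for illustration.
    jarURI: local:///opt/flink/usrlib/sql-runner.jar
    args: ["/opt/flink/sql/orders-enrichment.sql"]
    parallelism: 2
    upgradeMode: stateless
```

The appeal of this style is that the entire pipeline, compute resources included, is expressed as desired state for an operator to reconcile, rather than as imperative deployment steps.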

Why attend?

I am a big believer in declarative formalisms: frameworks that specify what needs to be done and not how to do it. Declarative specifications tend to require less maintenance, are more portable because they aren’t coupled to specific implementation patterns, and are generally more accessible to data experts. 

For those who haven’t done ETL before, this session will be a good introduction to the benefits of data pipelines. And for those who have, come learn about the design choices made by a strong team promoting more effective modern data flow.

5. Robinhood’s Kafkaproxy: Decoupling Kafka Consumer Logic from Application Business Logic

by Software Engineers Mun Yong Jang and Tony Chen

Weds., Sept. 27, 11:30 am-12:15 pm PDT

Apache Kafka is Robinhood’s most mission-critical infrastructure. But as Robinhood grows, it has become challenging for infrastructure engineers to manage the differing requirements its many application teams have for their producers and consumers.

In this session, you’ll learn from Mun and Tony how Robinhood solved this problem by developing a consumer proxy that manages the following concerns:

  • Kafka consumer logic

  • Resource utilization in each Kafka consumer

  • Kafka consumer failures
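
To make the decoupling idea concrete ahead of the talk, here is a minimal, generic sketch of the pattern in plain Python with a simulated message source. This is not Robinhood’s implementation, and names like `ConsumerProxy` and `handle` are hypothetical: the proxy owns the consume loop, retries, and failure isolation, while the application team supplies only a business-logic callback.

```python
from dataclasses import dataclass
from typing import Callable, Iterable


@dataclass
class Message:
    topic: str
    value: str


class ConsumerProxy:
    """Owns consumer concerns (the poll loop, retries, failure isolation)
    so application code only supplies business logic."""

    def __init__(self, handler: Callable[[Message], None], max_retries: int = 3):
        self.handler = handler
        self.max_retries = max_retries
        self.dead_letter: list[Message] = []  # parked failures, not lost

    def run(self, messages: Iterable[Message]) -> None:
        for msg in messages:
            for attempt in range(self.max_retries):
                try:
                    self.handler(msg)
                    break
                except Exception:
                    if attempt == self.max_retries - 1:
                        # Retries exhausted: park the message instead of
                        # crashing the whole consumer.
                        self.dead_letter.append(msg)


# The application team writes only the business logic:
processed = []

def handle(msg: Message) -> None:
    if msg.value == "bad":
        raise ValueError("cannot process")
    processed.append(msg.value.upper())


proxy = ConsumerProxy(handle)
proxy.run([Message("orders", "a"), Message("orders", "bad"), Message("orders", "b")])
# processed == ["A", "B"]; the failing message lands in proxy.dead_letter
```

The design point mirrors the session abstract: retry policy, resource handling, and failure behavior live in one shared component, so every team gets consistent semantics without reimplementing them.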

Why attend?

This Kafka proxy is analogous to a service mesh for microservices: it decouples infrastructure management from the application logic of each microservice, promoting consistent management across the whole enterprise. You don’t want your developers redundantly solving the same problems with different solutions. This talk will outline an approach to standardizing infrastructure concerns as you scale, allowing developers to focus on business logic.

Wait, there’s more!

If a packed agenda of top-notch sessions and networking with your peers in person isn't enough to convince you to attend Current 2023, you’ll find even more on the schedule this year to really round out everyone’s Current 2023 experience:

  • Meetup Hub: Join members of the data streaming community in the Expo Hall where we will be hosting informal discussions on interesting topics, like Python libraries for stream processing, using Flink with Kafka, architectures for real-time analytics, and much more. Keep checking the agenda for the latest updates on this.

  • Training and certifications: We have an exciting lineup of free training (Fundamentals of Apache Kafka®), hands-on labs (Cluster Linking on Confluent Cloud and Schema Linking on Confluent Cloud), and two technical certifications—Confluent Certified Developer for Apache Kafka® and Confluent Certified Administrator for Apache Kafka® ($75 each)—all designed to help you expand your technical expertise.

  • Birds-of-a-Feather lunches: We’re offering unique networking opportunities designed to connect you with like-minded peers and experts. At our Financial Services Lunch on Sept. 27, for example, you’ll hear from finserv leaders on how data streaming is fueling the next wave of emerging tech in the finance and banking industry—and how you can help.

  • AWS Gameday: A gamified workshop where you will build an event-driven data streaming pipeline to detect cheating, identify toxic chat, ban players, and update matchmaking ranking in real time. Participants will get hands-on experience with services such as AWS Lambda, Amazon GameLift, DynamoDB, and Confluent Cloud—plus, plenty of chances to win swag!

  • Current 2023 Party: Join us Tuesday evening (September 26) to unwind after a day of immersive learning. Head over to San Pedro Square Market to savor beverages and refreshments, enjoy live music, and continue networking with your fellow attendees from the data streaming community.

  • Diversity & Inclusion Programming: Get ready for our Women in Tech Lunch on Sept. 26—an empowering session filled with knowledge-sharing and scope for building meaningful connections with fellow women in the space. Plus, join the fireside chat on The Power of Storytelling: From Startup to Transformation with Ty Spells, and an opportunity to volunteer with Destination Home and learn how they are working to end homelessness in Silicon Valley. 

  • Streaming Pass: While we look forward to seeing you at Current 2023, we have also arranged for a free Streaming Pass for those who are unable to join us in person this year. The Streaming Pass will give you virtual access to our keynotes and selected breakout sessions.

Don’t wait! Join us September 26-27 in San Jose, California to connect with thought leaders, movers and shakers, and star practitioners in data streaming in real time. 

Register today with discount code PRMBLG23 to get 25% off registration prices.

Andrew Sellers leads Confluent’s Technology Strategy Group, a team supporting strategy development, competitive analysis, and thought leadership.
