If you looked at the Kafka Summits I’ve been a part of as a sequence of immutable events (and they are, unless you know something about time I don’t), it would look like this: New York City 2017, San Francisco 2017, London 2018, San Francisco 2018, New York City 2019, London 2019, San Francisco 2019. That makes this the seventh Summit I’ve attended.
Yes, you read that right. I’ve officially been to seven Kafka Summits in my career. I started going well before I worked for Confluent. The passionate community, fascinating use cases, and high-quality sessions kept me coming back year after year. It’s grown so much since the early days, but Summit still manages to stay true to the committers and developers who make everything we do at Confluent possible.
With day 1 in the books, let’s talk about day 2.
Did I mention we went to Build-A-Bear yesterday with the 2020 class of Confluent Community Catalysts? Well, I wanted to share the experience with everyone present at Summit and not just blog readers, so I brought my stuffed bear (I named him “Bear”) onto the stage to get things kicked off. I think he was well received.
Then Confluent’s somewhat more self-respecting CEO, Jay Kreps, opened the morning with a slight remix of Marc Andreessen’s famous truism that software is eating the world. It is not just that software is eating the world, but that companies themselves are becoming software; that is, not only are businesses using more software—that by itself is a somewhat vacuous observation—but more business processes are being executed end to end by software, asynchronously, without the need for human interaction.
We tend to think of software as a support system for a user interface, even as that interface has changed over the past 50 years from teletype to terminal to GUI to the web to mobile apps. In all cases, a person does a thing to the interface and waits synchronously for a response. Databases have grown up as the optimal tools for managing the data of systems like these, but the more pressing asynchronous needs of the emerging class of applications call for something different. If you want to think more about this, I strongly recommend watching Jay’s keynote.
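To make the contrast concrete, here’s a minimal sketch of the asynchronous style Jay was describing: a Kafka consumer that processes events as they arrive, with no user blocked on a response anywhere in sight. The topic name, consumer group, and broker address are all hypothetical stand-ins, not anything from the keynote.

```java
import java.time.Duration;
import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

public class OrderEventProcessor {
    public static void main(String[] args) {
        Properties props = new Properties();
        // Placeholder broker and group; substitute your own.
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "order-processors");
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(List.of("orders")); // hypothetical topic
            while (true) {
                // No user is waiting on this loop; the business process
                // advances whenever an event shows up.
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(500));
                for (ConsumerRecord<String, String> record : records) {
                    System.out.printf("Processing order %s: %s%n", record.key(), record.value());
                }
            }
        }
    }
}
```

The point of the sketch is the shape of the program: there is no request handler and no caller to answer, just a loop that reacts to the event stream.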
Reached for comment, Bear described the streaming database concept as “an exciting development in data infrastructure purpose-built for today’s asynchronous back ends.” I’ve only known him for a day, but I’ve already come to know this sort of perspicacity to be downright typical for him.
Then Lyft’s Engineering Manager of Streaming Platforms, Dev Tagare, sat down with Jay to talk about how Lyft uses Apache Kafka®. One memorable example from their conversation: the little moving car you see when you’re waiting for your ride? Those movement events flow through Kafka in real time. I find this kind of thing very helpful when I’m explaining to non-tech people what I do. Everybody uses Kafka all day every day. You can’t avoid it, even if you don’t know it’s happening.
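Lyft didn’t share code on stage, but just to illustrate the idea, here’s a hedged sketch of what producing those movement events might look like. The topic name, driver ID, and message shape are my inventions for the example, not Lyft’s actual schema.

```java
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class DriverLocationProducer {
    public static void main(String[] args) {
        Properties props = new Properties();
        // Placeholder broker address.
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // Keying by driver ID keeps each driver's updates in order on one
            // partition, so whatever animates the little car on the map sees
            // positions in sequence.
            String driverId = "driver-42";
            String position = "{\"lat\": 37.7749, \"lon\": -122.4194, \"ts\": 1570000000000}";
            producer.send(new ProducerRecord<>("driver-locations", driverId, position));
        }
    }
}
```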
Next up, my esteemed co-worker Priya Shivakumar took the stage to talk about Confluent Cloud. She has had a pivotal role in shaping the product, so it was good to hear from her where it’s going and why.
As she explained in her blog post yesterday, Confluent Cloud presents us with Kafka made serverless (no brokers, no scaling of clusters) and $50 off your bill each month for the first three months after you sign up. Barriers to cloud Kafka usage, be gone.
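To show how little there is to configure in that model, here’s a sketch of a client setup for a cloud Kafka cluster: an endpoint plus an API key, and no broker hosts of your own to think about. The endpoint and credentials below are placeholders; you would substitute the values from your own account.

```java
import java.util.Properties;
import org.apache.kafka.clients.CommonClientConfigs;
import org.apache.kafka.common.config.SaslConfigs;

public class CloudClientConfig {
    // All values here are placeholders for illustration.
    public static Properties build() {
        Properties props = new Properties();
        props.put(CommonClientConfigs.BOOTSTRAP_SERVERS_CONFIG,
            "pkc-xxxxx.us-west-2.aws.confluent.cloud:9092");
        props.put(CommonClientConfigs.SECURITY_PROTOCOL_CONFIG, "SASL_SSL");
        props.put(SaslConfigs.SASL_MECHANISM, "PLAIN");
        props.put(SaslConfigs.SASL_JAAS_CONFIG,
            "org.apache.kafka.common.security.plain.PlainLoginModule required "
            + "username=\"<API_KEY>\" password=\"<API_SECRET>\";");
        return props;
    }
}
```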
Of course I’ve said very little about the sessions themselves, which are the backbone of this event. I’ll summarize those for you when the videos come out in a few weeks. And after that, I’ll be telling you about the 2020 London Summit before you know it.