Deploying and running Kafka has become steadily easier. Today you can choose from a large ecosystem of managed offerings or deploy directly to Kubernetes. But while you have plenty of options for tuning your Kafka configuration and choosing infrastructure that matches your use case and budget, it’s not always easy to tell how these choices affect overall cluster performance.
In this session, we’ll look at Kafka performance from an infrastructure perspective. How does your choice of storage, compute, and networking affect cluster throughput? How can you optimize for low cost or fast recovery? When is it better to scale brokers up rather than out?
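To make the throughput question concrete, here is a minimal back-of-the-envelope sketch (not from the session itself) of how replication factor and consumer fan-out turn producer ingress into per-broker disk and network load. The function name, the even-distribution assumption, and the example numbers are illustrative; it ignores factors like compression, network ingress, and retention.

```python
# Rough Kafka sizing sketch: estimates per-broker disk write and network
# egress from cluster ingress, replication factor, and consumer fan-out.
# Assumes load is spread evenly across brokers (an idealization).

def per_broker_load(ingress_mb_s: float, replication_factor: int,
                    consumer_groups: int, brokers: int) -> dict:
    """Approximate what each broker must sustain, on average."""
    # Every produced byte is written replication_factor times across the cluster.
    disk_write_total = ingress_mb_s * replication_factor
    # Leaders ship each byte to (replication_factor - 1) followers, and each
    # consumer group reads the full ingress once.
    network_out_total = (ingress_mb_s * (replication_factor - 1)
                         + ingress_mb_s * consumer_groups)
    return {
        "disk_write_mb_s": disk_write_total / brokers,
        "network_out_mb_s": network_out_total / brokers,
    }

# Example: 300 MB/s ingress, replication factor 3, 2 consumer groups, 6 brokers.
print(per_broker_load(300, 3, 2, 6))
# -> {'disk_write_mb_s': 150.0, 'network_out_mb_s': 200.0}
```

Even this simplified model shows why storage and network choices matter: the same producer traffic can translate into several times as much disk and network work per broker.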
You’ll walk away from this session with a mental model that allows you to better understand the limits of your clusters. You can use this knowledge to make informed decisions on how to achieve the throughput, availability, and durability required for your use cases while optimizing infrastructure cost.