Apache Kafka is the standard for event-driven applications. But it’s not without its challenges, and the ops burden can be heavy. Organizations that successfully build and run their own Kafka environment must make significant investments in engineering and operations to account for failover and security. And they spend hundreds of thousands of dollars on capacity that is often idle but necessary to handle unexpected bursts or spikes in their data. These are the typical reasons for running in the cloud.
There is fantastic news if you’re just coming up to speed on Kafka: we are eliminating these challenges and lowering the barrier to entry by making Kafka serverless and offering Confluent Cloud for free*.
As we are announcing today at Kafka Summit San Francisco, you can get started with Confluent Cloud on the cloud of your choice for free. Sign up on the Confluent Cloud landing page, and we’ll give you up to $50 USD off each month for the first three months. We’ve simplified pricing, so $50 goes a long way, and you can easily calculate what you would pay if you go beyond it.
Confluent Cloud provides a serverless experience for Apache Kafka on your cloud of choice, including Google Cloud Platform (GCP), Microsoft Azure, and Amazon Web Services (AWS). Kafka made serverless means you never need to worry about configuring or managing clusters; all you have to think about is the problem you are trying to solve. In other words, you focus on building apps, not infrastructure.
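To make that concrete, here is a minimal sketch of what an application actually configures when producing to Confluent Cloud, using the confluent-kafka Python client. The endpoint, credentials, and topic name below are placeholders (not values from this announcement); the point is that a bootstrap server and an API key are the only “infrastructure” the code ever sees.

```python
from confluent_kafka import Producer

# All the "infrastructure" you configure: the cluster endpoint and an API key.
# The values below are placeholders; copy the real ones from the Confluent Cloud UI.
conf = {
    "bootstrap.servers": "<CLUSTER_ENDPOINT>:9092",
    "security.protocol": "SASL_SSL",
    "sasl.mechanisms": "PLAIN",
    "sasl.username": "<API_KEY>",
    "sasl.password": "<API_SECRET>",
}

producer = Producer(conf)

def delivery_report(err, msg):
    # Called once the broker confirms (or rejects) the write.
    if err is not None:
        print(f"Delivery failed: {err}")
    else:
        print(f"Delivered to {msg.topic()} [{msg.partition()}]")

# Produce a single event to a hypothetical "orders" topic and wait for delivery.
producer.produce("orders", key="order-1", value='{"amount": 42}', callback=delivery_report)
producer.flush()
```

Everything else, from broker sizing to storage, happens behind that endpoint rather than in your code or your ops runbooks.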
One of the biggest challenges in building a Kafka environment can be sizing for the future, including unexpected peaks or bursts. If you do not pre-provision this capacity, or expand the cluster fast enough to accommodate an increase or spike in traffic, you run the risk of downtime or data loss.
With Kafka made serverless in Confluent Cloud, you get true elastic scaling without having to change so much as a single configuration parameter in your account. Your environment grows seamlessly from development to testing to production loads without you spending time manually sizing the cluster for peaks or future expansions.
With consumption-based pricing, you only pay for what you use now (data in, data out, and data retained), and with elastic scaling, your environment is already prepared for future scale. You don’t pay for capacity you don’t need today or for idle capacity that sits unused most of the time. We have always said this is what the cloud is supposed to do, and now, in the case of Kafka, it does.
There are three dimensions to Confluent Cloud’s consumption-based pricing: data streamed in, data streamed out, and data retained.
There is no charge for fixed infrastructure components like brokers or ZooKeeper, so the consumption-based charges scale to zero when your usage does. You never see a minimum charge, and you don’t have to make any commitments beyond the next message you produce or consume.
If you were to stream 1 GB of data in, retain that GB, and do nothing else, it would cost you exactly $0.11 for data in plus $0.30 for storage ($0.10 per GB with 3x replication), for a total of $0.41 on your bill that month. As an example development use case, let’s say you streamed in 50 GB of data, stored all of it, and had two consumers, so you streamed out 100 GB. That translates to $31.50 for the month.
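To spell out the arithmetic, here is a quick back-of-the-envelope check of both examples. The data-in and storage rates come straight from the first example; the data-out rate is an assumption (billed at the same $0.11 per GB), chosen because it reproduces the quoted totals.

```python
# Back-of-the-envelope check of the pricing examples above.
DATA_IN_PER_GB = 0.11        # $/GB streamed in (from the 1 GB example)
DATA_OUT_PER_GB = 0.11       # $/GB streamed out (assumed equal to data in)
STORAGE_PER_GB_MONTH = 0.10  # $/GB-month, applied to each of the 3 replicas
REPLICATION = 3

def monthly_cost(gb_in: float, gb_out: float, gb_stored: float) -> float:
    """Estimated monthly bill for the given usage, in USD."""
    return (gb_in * DATA_IN_PER_GB
            + gb_out * DATA_OUT_PER_GB
            + gb_stored * REPLICATION * STORAGE_PER_GB_MONTH)

print(f"${monthly_cost(1, 0, 1):.2f}")      # $0.41  -> the 1 GB example
print(f"${monthly_cost(50, 100, 50):.2f}")  # $31.50 -> the 50 GB development example
```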
Another concern you might have as you’re building out a system with Kafka is downtime resulting from running out of file descriptors, connection and authentication storms, running out of disk space, bad topic/broker configurations, or any other administrative headache you might think of. With Confluent Cloud, these become things of the past. Our service-level agreement (SLA) guarantees an uptime of 99.95%. All you have to do is write applications.
You can also leverage the Confluent Cloud community for support, or add on a Confluent Cloud support plan based on the scope of your project. With three tiers in addition to free community support, it’s easy to find the right level you need. Developer support starts at just $29 per month, so you can leverage world-class support even at the earliest stages of development or production.
When it comes to Confluent Cloud, you don’t have to make all-or-nothing decisions. As you prove the value of Kafka with your first few event streaming applications built on Confluent Cloud, adoption can be incremental. As the value becomes apparent throughout your organization, Kafka adoption will grow, and your environment will scale effortlessly.
Now that you can get all the benefits of Kafka with a serverless experience, get started today on the cloud of your choice. Confluent Cloud is now available on Microsoft Azure in addition to Google Cloud Platform (GCP) and Amazon Web Services (AWS).
You can start with simple consumption-based billing (charged monthly), add the appropriate tier of support when you need it, then make an annual commitment for discounts as your event streaming needs mature. When you get to a steady state for your event streams and know what commitment makes sense, you can take advantage of significant discounts.
With Confluent Cloud Enterprise, we can help you create a custom setup if the needs of your mission-critical apps are no longer met by the standard setup, for example, if you are bursting to more than 100 MBps or need a more sophisticated networking architecture such as VPC peering or a transit gateway.
Ultimately, your event streaming platform will become the central nervous system of your business, powering applications and transporting all your enterprise events. I encourage you to try the serverless experience of Confluent Cloud today—for free*—and take the first step of the journey.
*With the try free promotion for Confluent Cloud, receive up to $50 USD off your bill each calendar month for the first three months. New signups only.