For those of you who are hungry for more stream processing, we are pleased to share the recent release of Confluent’s Stream Processing Cookbook, which features short and tasteful KSQL recipes that help you solve specific, domain-focused problems using KSQL.
Organized according to various use cases (e.g., partitioning, streaming ETL, anomaly detection, data wrangling), KSQL recipes provide easy directions that you can follow to begin—or continue!—putting KSQL to use.
KSQL is the streaming SQL engine for Apache Kafka®, and it complements the Kafka Streams API. As adoption of KSQL and Kafka Streams grows across the stream processing world and the Kafka ecosystem, and as investment in streaming applications increases, these recipes benefit developers and the business alike: they provide pre-baked KSQL techniques that need little modification, whether or not you have a Java background.
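If you haven't written KSQL before, a minimal sketch gives the flavor (the `pageviews` topic, its JSON schema and the `%error%` filter here are hypothetical, not from any particular recipe): you register a stream over a Kafka topic, then derive a new, continuously updated stream from it with familiar SQL.

```sql
-- Register a stream over a hypothetical Kafka topic with JSON-encoded values.
CREATE STREAM pageviews (viewtime BIGINT, userid VARCHAR, pageid VARCHAR)
  WITH (KAFKA_TOPIC='pageviews', VALUE_FORMAT='JSON');

-- Derive a new, Kafka-backed stream that continuously filters the source.
CREATE STREAM pageviews_errors AS
  SELECT viewtime, userid, pageid
  FROM pageviews
  WHERE pageid LIKE '%error%';
```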
KSQL recipes are designed to help people build event-driven, real-time systems. They are submitted by Confluent specialists and community members alike, so the cookbook also serves as a place to collaborate, share and inspire one another with KSQL applications, broadening approaches to streaming application development.
KSQL recipes serve two main purposes:
First, the recipes show you how to use KSQL in different ways. You can follow a recipe exactly as written, but you can also take the same code pattern and apply it to other use cases.
As an example, a commonly used KSQL recipe is Processing Syslog Data. Its general code pattern applies to a variety of use cases, including fraud detection, network traffic monitoring, signal anomaly detection and propensity analysis: all of these share a similar code base, differing only in the specifics of each implementation, as the sketch below shows.
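As a rough sketch of that reuse (the field names, message filter and threshold here are illustrative assumptions, not the recipe's exact schema), the same register-then-filter-and-aggregate pattern can be repurposed for simple anomaly detection, flagging hosts that emit a burst of failed logins:

```sql
-- Register a stream over a hypothetical syslog topic.
CREATE STREAM syslog (host VARCHAR, facility INT, message VARCHAR)
  WITH (KAFKA_TOPIC='syslog', VALUE_FORMAT='JSON');

-- Same code pattern, different use case: count failed logins per host
-- in one-minute tumbling windows and keep only suspicious hosts.
CREATE TABLE possible_attacks AS
  SELECT host, COUNT(*) AS failed_logins
  FROM syslog
  WINDOW TUMBLING (SIZE 1 MINUTE)
  WHERE message LIKE '%Failed password%'
  GROUP BY host
  HAVING COUNT(*) > 5;
```

Swap the filter and the aggregation and the same skeleton serves fraud detection or network traffic monitoring instead.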
In addition, the recipes make building, testing and modifying applications faster. Because they get you started quickly, you can build in your customizations and specifics, run the application, examine the output and iterate, all without the help of a Kafka operator.
This may seem like a marginal advantage, but given how many streaming applications and transformations are being built, and how quickly, all of these cycles add up. Needing a Kafka administrator for every change creates a potential bottleneck that slows development and defeats the agility KSQL is meant to deliver.
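For instance, examining output during iteration can be as simple as a transient query from the KSQL CLI. This assumes the hypothetical `possible_attacks` table sketched above; in recent ksqlDB versions you would also add `EMIT CHANGES` before the `LIMIT`.

```sql
-- Eyeball a few rows of the derived table before wiring anything downstream.
SELECT host, failed_logins FROM possible_attacks LIMIT 5;
```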
The Stream Processing Cookbook is now available: browse the KSQL recipes for handy tutorials and examples to follow.
Let’s get cooking!