Extensive out-of-the-box functionality, a large user community, and up-to-date, cloud-native features make Spring and its libraries a strong option for anchoring a microservices architecture built on Apache Kafka® and Confluent Cloud. Spring takes care of boilerplate system responsibilities—letting you focus on your business logic—while serverless Apache Kafka on Confluent Cloud provides your transport and storage layers.
For an entire course on using Spring and its libraries with the Apache Kafka ecosystem, make sure to visit Confluent Developer.
Spring’s opinionated approach can significantly reduce your development time and allow you to easily collaborate with other developers on your team. Moreover, its support for multiple profiles means that you are able to provide different configuration parameters based on your environment (for example, development vs. QA). Adding the Spring for Apache Kafka project to your Spring implementation provides Kafka-specific capabilities, and its features are modeled on common Spring patterns, so you are likely to find them familiar.
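As a minimal sketch of environment-specific configuration in Java config (the class name, bean names, and endpoints below are placeholders, and the same effect is often achieved with per-profile property or YAML files), profile-scoped beans might look like this:

```java
import java.util.Map;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.context.annotation.Profile;

@Configuration
public class KafkaEnvironmentConfig {

    // Active when spring.profiles.active=dev: point at a local broker.
    @Bean
    @Profile("dev")
    public Map<String, Object> devProducerConfigs() {
        return Map.of(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
    }

    // Active when spring.profiles.active=qa: point at a (placeholder) Confluent Cloud endpoint.
    @Bean
    @Profile("qa")
    public Map<String, Object> qaProducerConfigs() {
        return Map.of(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG,
                "pkc-xxxxx.region.provider.confluent.cloud:9092");
    }
}
```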
The KafkaTemplate class in the Spring for Apache Kafka project was designed to be similar to Spring’s JmsTemplate. It’s relatively simple to use, particularly if you are already familiar with Kafka producers—which it wraps. To proceed, you create a ProducerFactory bean, providing a configuration map, then use that to make a KafkaTemplate, which provides numerous convenience methods for sending messages. (Note that sends are asynchronous by default; KafkaTemplate returns a future that you can block on when you need synchronous behavior.)
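A rough sketch of that wiring is shown below; the class name, bootstrap server, and serializer choices are illustrative assumptions rather than required values:

```java
import java.util.Map;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.common.serialization.StringSerializer;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.kafka.core.DefaultKafkaProducerFactory;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.kafka.core.ProducerFactory;

@Configuration
public class ProducerConfiguration {

    // Build a ProducerFactory from a plain configuration map.
    @Bean
    public ProducerFactory<String, String> producerFactory() {
        Map<String, Object> props = Map.of(
                ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092",
                ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class,
                ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
        return new DefaultKafkaProducerFactory<>(props);
    }

    // KafkaTemplate wraps a Kafka producer built by the factory above.
    @Bean
    public KafkaTemplate<String, String> kafkaTemplate() {
        return new KafkaTemplate<>(producerFactory());
    }
}
```

A call such as kafkaTemplate.send("orders", key, value) returns a future; calling get() on that future gives you blocking, synchronous semantics when you need them.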
In Spring, the components that communicate with messaging systems have access to message-driven POJO functionality, in which regular POJOs serve as asynchronous message listeners. Thus, consuming Kafka messages in Spring is accomplished by simply annotating a bean method with @KafkaListener, which causes the framework to instantiate a MessageListenerContainer that takes care of parallelization, configuration, retries, offsets, and the other plumbing your Kafka application needs. Offloading this work allows you to focus on your primary logic.
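A minimal listener might look like the following sketch; the topic name, group id, and class name are illustrative placeholders:

```java
import org.springframework.kafka.annotation.KafkaListener;
import org.springframework.stereotype.Component;

// A plain Spring component acting as a message-driven POJO; the framework
// creates and manages the underlying MessageListenerContainer for it.
@Component
public class OrderListener {

    // Topic and group id are placeholders for your own names.
    @KafkaListener(topics = "orders", groupId = "order-service")
    public void onMessage(String value) {
        System.out.println("Received: " + value);
    }
}
```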
Manual topic creation is straightforward in Confluent Cloud, but you may wish to create topics programmatically, for example, if you are working with regular expressions in topic names or creating a large number of topics. You can accomplish this in Spring with the TopicBuilder class in conjunction with the KafkaAdmin class, which wraps Kafka’s Admin API. TopicBuilder methods let you set standard parameters, for example, the number of partitions and number of replicas, as well as other topic-level configurations such as the compression type used for the topic.
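For instance, a NewTopic bean built with TopicBuilder might look like the sketch below (the topic name, partition and replica counts, and compression choice are illustrative). With KafkaAdmin present (auto-configured by Spring Boot), the topic is created on startup if it doesn’t already exist:

```java
import org.apache.kafka.clients.admin.NewTopic;
import org.apache.kafka.common.config.TopicConfig;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.kafka.config.TopicBuilder;

@Configuration
public class TopicConfiguration {

    // Declares a topic with explicit sizing and a topic-level compression setting.
    @Bean
    public NewTopic ordersTopic() {
        return TopicBuilder.name("orders")
                .partitions(6)
                .replicas(3)
                .config(TopicConfig.COMPRESSION_TYPE_CONFIG, "zstd")
                .build();
    }
}
```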
Adding the @EnableKafkaStreams annotation, some configuration parameters, and a StreamsBuilder to your Spring code lets you access the Kafka Streams APIs. Spring’s wrapper over Kafka Streams is quite thin, and Spring manages the lifecycle of the KafkaStreams instance for you, so you can focus primarily on business logic (as you have probably come to expect by now after learning about the other Spring features). Additionally, Spring for Apache Kafka wraps JMX metrics from Kafka Streams and makes them available through the Micrometer framework.
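A minimal topology sketch is shown below; it assumes the Streams application id and bootstrap servers are supplied elsewhere (for example, via Spring Boot’s spring.kafka.streams.* properties), and the topic names and transformation are purely illustrative:

```java
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.kstream.KStream;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.kafka.annotation.EnableKafkaStreams;

// @EnableKafkaStreams has Spring build and manage the KafkaStreams lifecycle;
// you only declare the topology.
@Configuration
@EnableKafkaStreams
public class StreamsConfiguration {

    @Bean
    public KStream<String, String> upperCaseStream(StreamsBuilder builder) {
        // Read from one topic, transform each value, and write to another.
        KStream<String, String> stream = builder.stream("orders");
        stream.mapValues(value -> value.toUpperCase())
              .to("orders-uppercased");
        return stream;
    }
}
```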
When you create a Confluent Cloud cluster, you have the option to enable Schema Registry. This provisions Schema Registry on Confluent Cloud, but to connect to it from your Spring Boot application, you’ll need its URL as well as credentials (an API key and secret). Your format options are Avro, JSON Schema, and Protobuf, and you can use multiple schemas in one application (Confluent provides SerDes for all three).
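A producer factory configured for Avro and Confluent Cloud Schema Registry might look roughly like this sketch; the bootstrap servers, registry endpoint, API key, and secret are placeholders you would supply from your own environment:

```java
import java.util.HashMap;
import java.util.Map;
import io.confluent.kafka.serializers.KafkaAvroSerializer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.common.serialization.StringSerializer;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.kafka.core.DefaultKafkaProducerFactory;
import org.springframework.kafka.core.ProducerFactory;

@Configuration
public class AvroProducerConfiguration {

    @Bean
    public ProducerFactory<String, Object> avroProducerFactory() {
        Map<String, Object> props = new HashMap<>();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "<bootstrap-servers>");
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
        // The Confluent Avro serializer registers/looks up schemas in Schema Registry.
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, KafkaAvroSerializer.class);
        props.put("schema.registry.url", "https://<schema-registry-endpoint>");
        props.put("basic.auth.credentials.source", "USER_INFO");
        props.put("basic.auth.user.info", "<sr-api-key>:<sr-api-secret>");
        return new DefaultKafkaProducerFactory<>(props);
    }
}
```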
Leveraging Spring can enable you to quickly start developing sophisticated Apache Kafka-based systems. Although Spring is opinionated by default (which, as mentioned above, saves time and simplifies communication among developers), keep in mind that it does provide the ability to extend or customize certain options, should you need something not available out of the box.