
Integrating Microservices with Confluent Cloud Using Micronaut® Framework

Written by Sandon Jacobs

Designing microservices using an event-driven approach has several benefits, including improved scalability, easier maintenance, clear separation of concerns, system resilience, and cost savings. With Apache Kafka® as an event plane, services now have a durable, scalable, and reliable source of event data. From Kafka topics, a microservice can easily rebuild and restore the state of the data used to serve end users.

Microservice architects searching for a JVM framework may want to explore Micronaut, a framework that embraces event-driven architecture. This article briefly introduces Micronaut and its benefits, then dives into the details of integrating your microservices with Apache Kafka on Confluent Cloud. Let's get into it.

What is Micronaut?

Micronaut is an open source JVM-based framework for building lightweight microservices. At its core, it is designed to avoid reflection to improve application startup times, computing injected dependencies at compile time rather than at runtime. Per the documentation, Micronaut supports best-practice patterns for building JVM applications, such as dependency injection and inversion of control, aspect-oriented programming, sensible defaults, and auto-configuration.
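To make the dependency injection point concrete, here is a minimal hedged sketch (the GreetingService and GreetingController names are ours, not from any example project):

import io.micronaut.http.annotation.Controller;
import io.micronaut.http.annotation.Get;
import jakarta.inject.Singleton;

// A bean Micronaut discovers and wires during compilation via annotation processing.
@Singleton
class GreetingService {
    String greet(String name) {
        return "Hello, " + name;
    }
}

// Constructor injection is resolved at compile time, so no reflection is needed at startup.
@Controller("/greetings")
class GreetingController {

    private final GreetingService greetingService;

    GreetingController(GreetingService greetingService) {
        this.greetingService = greetingService;
    }

    @Get("/{name}")
    String greet(String name) {
        return greetingService.greet(name);
    }
}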

Integrating with Apache Kafka

We'll cover two use cases to illustrate Micronaut's integration with Kafka. Both use cases apply a "listen to yourself" pattern, with a REST controller sending data-altering commands to Kafka. From there, a listener processes the Kafka events and updates the underlying data model. Query requests call a data source directly via JPA.

There are two examples because we want to highlight data serialization in Micronaut. We’ll begin by letting Micronaut use its sensible defaults to infer how the key and value should be serialized. Then, we’ll pivot to using the Stream Governance capabilities of Confluent Cloud to manage Apache Avro™ schemas for our structured data.

The Foundation

With the Micronaut framework, we can also build message- and event-driven applications. And, yes, that includes Apache Kafka—as well as RabbitMQ®, JMS, and MQTT. But we're here for Kafka, so let's get into that. For starters, add the micronaut-kafka dependency to the application build. Here it is in Gradle:

implementation("io.micronaut.kafka:micronaut-kafka")

Next, we should explore how to configure a Micronaut application to use Apache Kafka. We prefer using YAML, TOML, or HOCON—instead of a properties file—for the sake of legibility. To use YAML, you need to add snakeyaml as a runtime dependency to your build:

runtimeOnly("org.yaml:snakeyaml")

Now that we have that in place, we can start to configure our connection to Apache Kafka—in our case on Confluent Cloud. One of the key features of Micronaut is its use of sensible defaults in application configuration. The examples inject connection parameters and credentials from environment variables so that our Confluent Cloud settings and secrets stay out of the configuration file. Here's the basis of our Confluent Cloud connection:

kafka:
  bootstrap.servers: ${CC_BROKER}
  security.protocol: SASL_SSL
  sasl.jaas.config: "org.apache.kafka.common.security.plain.PlainLoginModule required username='${KAFKA_KEY_ID}' password='${KAFKA_KEY_SECRET}';"
  sasl.mechanism: PLAIN
  client.dns.lookup: use_all_dns_ips
  acks: all

Example 1: Using Micronaut’s defaults

Our first use case will adhere to the Micronaut defaults as closely as possible. In this scenario, a REST controller sends events about product price changes to a Kafka topic. A listener to that topic updates the underlying data model, and subsequent queries use the controller's GET endpoints to retrieve the Product entities from the data store.
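To make the flow concrete, here is a hedged sketch of what such a controller could look like. The class shape, the findByProductCode finder, and the Product entity are illustrative; the ProductPriceChangesClient and ProductPriceChangedEvent types are shown in the sections that follow:

import io.micronaut.http.HttpStatus;
import io.micronaut.http.annotation.Body;
import io.micronaut.http.annotation.Controller;
import io.micronaut.http.annotation.Get;
import io.micronaut.http.annotation.Put;

import java.util.Optional;

@Controller("/products")
public class ProductController {

    private final ProductPriceChangesClient priceChangesClient; // Kafka producer interface (shown below)
    private final ProductRepository productRepository;          // JPA repository used for queries

    public ProductController(ProductPriceChangesClient priceChangesClient,
                             ProductRepository productRepository) {
        this.priceChangesClient = priceChangesClient;
        this.productRepository = productRepository;
    }

    // Command path: publish the price change to Kafka; a listener updates the database.
    @Put("/{productCode}/price")
    public HttpStatus changePrice(String productCode, @Body ProductPriceChangedEvent event) {
        priceChangesClient.send(productCode, event);
        return HttpStatus.ACCEPTED;
    }

    // Query path: read the current state directly from the data store via JPA.
    @Get("/{productCode}")
    public Optional<Product> byCode(String productCode) {
        return productRepository.findByProductCode(productCode);
    }
}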

Serialization

A primary concern in event streaming is data serialization. This is vital to any distributed data contract—event data needs a known structure, full stop. And there are multiple ways to achieve this. Given the nature of the listen-to-yourself pattern, the events published are intended to be consumed by our application. This won’t always be the case; we’ll elaborate on this later. For now, let’s use this pattern to highlight the default serialization methodology used by Micronaut.

When serializing data to Kafka, Micronaut takes an "educated attempt" at inferring the Serializer implementation to use, given the data type of the key and value. Our initial pass at producing data to the product-price-changes topic serializes the key as a String and the value as a ProductPriceChangedEvent record. Looking at the ProductPriceChangedEvent record, we see the @Serdeable annotation:

@Serdeable
public record ProductPriceChangedEvent(String productCode, BigDecimal price) {
}

Applying the @Serdeable annotation indicates that this ProductPriceChangedEvent class is to be serialized and deserialized as JSON. Micronaut's DefaultSerdeRegistry class makes these selections at compile time via bean introspection. This directly contrasts with frameworks like Spring, which make these decisions at runtime, a much less efficient process.

Producing events to Kafka

Sending events to Kafka starts by decorating an interface with the @KafkaClient annotation.

@KafkaClient // (1)
public interface ProductPriceChangesClient {

   @Topic("product-price-changes") // (2)
   void send(@KafkaKey String productCode, ProductPriceChangedEvent event);  // (3)
}

The ProductPriceChangesClient interface is annotated as a @KafkaClient (1). This annotation provides AOP advice on how to configure a KafkaProducer instance. This example does not provide a value property, so it will use the default producer configuration. More on that in a bit.

To send events to a Kafka topic, we annotate a method of the interface with @Topic (2), providing the name of the topic to which events are sent. The parameters of this send() method (3) are the elements of the event to send to Kafka. If this method had only one parameter, it would be treated as the value of the resulting ProducerRecord. In this case, we have two parameters, one of them annotated with @KafkaKey (you guessed it, this becomes the key of the ProducerRecord, with the second parameter as the value). Additional parameters annotated with @MessageHeader would add headers to the resulting ProducerRecord.
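For instance, a hedged sketch of a variant that also writes a header might look like this (the interface name and the change-source header are ours, not from the demo project):

import io.micronaut.configuration.kafka.annotation.KafkaClient;
import io.micronaut.configuration.kafka.annotation.KafkaKey;
import io.micronaut.configuration.kafka.annotation.Topic;
import io.micronaut.messaging.annotation.MessageHeader;

@KafkaClient
public interface ProductPriceChangesClientWithHeader {

    // The @MessageHeader parameter is written as a header on the resulting ProducerRecord.
    @Topic("product-price-changes")
    void send(@KafkaKey String productCode,
              ProductPriceChangedEvent event,
              @MessageHeader("change-source") String changeSource);
}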

Consuming Kafka events

To consume these ProductPriceChangedEvent records from Kafka, let's decorate a class (not an interface) as a @KafkaListener.

@KafkaListener(offsetReset = OffsetReset.EARLIEST, groupId = "demo") // (1)
public class ProductPriceChangedEventHandler {

   private static final Logger LOG = LoggerFactory.getLogger(ProductPriceChangedEventHandler.class);

   private final ProductRepository productRepository;

   ProductPriceChangedEventHandler(ProductRepository productRepository) {
       this.productRepository = productRepository;
   }

   @Topic("product-price-changes")  // (2)
   public void handle(ProductPriceChangedEvent event) {  // (3)
       LOG.info("Received product price change with code :{}", event.productCode());
       productRepository.updateProductPrice(event.productCode(), event.price());
   }
}

The @KafkaListener annotation (1) encapsulates the configuration and creation of a KafkaConsumer. Like @KafkaClient, the absence of a value parameter means we'll use the default consumer configuration here. However, we can set other Kafka ConsumerConfig values, such as auto.offset.reset and group.id, which are exposed through the annotation.

The handle() method is annotated with the @Topic annotation (2), specifying the Kafka topic from which it consumes events. Here, the handle() method has one parameter (3), which is implied to be the value of the underlying ConsumerRecord. This simple example calls a method on the ProductRepository to update the price of a Product.
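If a handler also needs the record key, Micronaut can bind it from a parameter annotated with @KafkaKey. Here's a hedged sketch of such a variant (the class name and group id are ours):

import io.micronaut.configuration.kafka.annotation.KafkaKey;
import io.micronaut.configuration.kafka.annotation.KafkaListener;
import io.micronaut.configuration.kafka.annotation.OffsetReset;
import io.micronaut.configuration.kafka.annotation.Topic;

@KafkaListener(offsetReset = OffsetReset.EARLIEST, groupId = "demo-key-aware")
public class ProductPriceChangedEventKeyAwareHandler {

    // The @KafkaKey parameter is bound from the ConsumerRecord key; the remaining
    // parameter is bound from the record value.
    @Topic("product-price-changes")
    public void handle(@KafkaKey String productCode, ProductPriceChangedEvent event) {
        // ... update the data model for productCode using event.price() ...
    }
}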

Details on configuration

Let’s revisit the configuration terms we breezed through in the previous section—specifically the concept of default configuration for producers and consumers.

Starting with the producer configuration, we can provide a default configuration for any @KafkaClient-annotated interfaces to use when those classes do not specify a producer configuration. This is useful when all producers in your application use the same serializers (both key and value). It also serves as the fallback for any named producer configurations, which override only the values they specify. The example below configures the default producer to use StringSerializer for the key and ByteArraySerializer for the value of produced events.

kafka:
  # ...
  producers:
    default:
      key.serializer: org.apache.kafka.common.serialization.StringSerializer
      value.serializer: org.apache.kafka.common.serialization.ByteArraySerializer

The same premise applies to classes annotated with @KafkaListener: we can configure defaults for Kafka consumers, defining an application-wide group.id and other KafkaConsumer configuration values here.

kafka:
  # ...
  consumers:
    default:
      group.id: this-micronaut-app
      key.deserializer: org.apache.kafka.common.serialization.StringDeserializer
      value.deserializer: org.apache.kafka.common.serialization.ByteArrayDeserializer

Example 2: Using stream governance

In the listen-to-yourself pattern for microservices, the serialization strategy detailed in our first example may suffice. Perhaps the product-price-changes topic is "self-contained," meaning access controls restrict read and write access to this data to our microservice. Perhaps this isn't part of our canonical data model.

When we work with data from the canonical model—data that has meaning across the organization—we need to ensure data quality and integrity. There is no better place to do this than straight from the source, as far “left” as possible in the lifecycle of our streams. This is where stream governance comes into play, specifically building, maintaining, and evolving the schema of the data in our canonical model. We would like the entire organization to sing from the same sheet of music.

Confluent Cloud provides stream governance centered around the Confluent Schema Registry, which we can use to manage and evolve data contracts. Confluent also provides implementations of Kafka's Serializer and Deserializer interfaces that are "Schema Registry-aware." These cover industry-standard serialization formats such as Avro, Protobuf, and JSON Schema.

The data model

Let's create Avro schemas for the concept of an order. The controller class sends an order change event to a Kafka topic, and when that event is processed, the items in the order are updated. Here are the schema definitions:

{
 "type": "record",
 "namespace": "io.confluent.devrel.event",
 "name": "OrderChangeEvent",
 "fields": [
   {
     "type": "long",
     "name": "orderId"
   },
   {
     "name": "eventType",
     "type": {
       "type": "enum",
       "name": "OrderChangeEventType",
       "namespace": "io.confluent.devrel.event",
       "symbols": ["ADD_ITEM", "UPDATE_ITEM_COUNT", "DELETE_ITEM"]
     }
   },
   {
     "name": "item",
     "type": "io.confluent.devrel.model.OrderItem"
   }
 ]
}

Notice this schema references two other named types: an enum called OrderChangeEventType (defined inline) and a record called OrderItem. Let's define the OrderItem schema:

{
 "type": "record",
 "namespace": "io.confluent.devrel.model",
 "name": "OrderItem",
 "fields": [
   {
     "name": "orderId",
     "type": "long"
   },
   {
     "name": "productId",
     "type": "string"
   },
   {
     "name": "description",
     "type": "string"
   },
   {
     "name": "quantity",
     "type": "int"
   },
   {
     "name": "unitPrice",
     "type": "double"
   }
 ]
}

A common practice in the Java community when working with Avro is to generate Java bean classes, providing compile-time safety and more readable code. Since we're using Gradle, we include the com.bakdata.avro Gradle plugin in our build:

plugins {
   id("com.bakdata.avro") version "1.0.0"
}

This gives us Gradle tasks to generate the Java bean classes by running this command:

❯ ./gradlew generateAvroJava

Now we have Java beans that extend SpecificRecordBase (and thus implement the SpecificRecord interface) from the Apache Avro Java SDK.
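As a quick illustration, the generated classes expose type-safe builders (the field values below are arbitrary):

import io.confluent.devrel.event.OrderChangeEvent;
import io.confluent.devrel.event.OrderChangeEventType;
import io.confluent.devrel.model.OrderItem;

// Build an OrderItem and wrap it in an OrderChangeEvent using the generated builders.
OrderItem item = OrderItem.newBuilder()
        .setOrderId(42L)
        .setProductId("a1b2c3")
        .setDescription("widget")
        .setQuantity(3)
        .setUnitPrice(19.99)
        .build();

OrderChangeEvent event = OrderChangeEvent.newBuilder()
        .setOrderId(42L)
        .setEventType(OrderChangeEventType.ADD_ITEM)
        .setItem(item)
        .build();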

Configure the producer

Micronaut allows multiple producer definitions in the configuration file. Given that these order changes are serialized differently than in the previous use case, the producer must be configured with the appropriate serializers. Here is the additional producer configuration:

kafka:
  # ...
  producers:
    default:
      client.id: this-micronaut-app
    order-changes:
      key.serializer: org.apache.kafka.common.serialization.LongSerializer
      value.serializer: io.confluent.kafka.serializers.KafkaAvroSerializer
      schema.registry.url: ${CC_SCHEMA_REGISTRY_URL}
      basic.auth.credentials.source: USER_INFO
      basic.auth.user.info: "${SCHEMA_REGISTRY_KEY_ID}:${SCHEMA_REGISTRY_KEY_SECRET}"

Here the order-changes producer overrides the default value of the key and value serializers—LongSerializer and KafkaAvroSerializer, respectively. Since KafkaAvroSerializer is Schema Registry-aware, we provide the configuration needed to connect to Confluent Schema Registry.

We apply this configuration to the interface annotated as the @KafkaClient for our order changes as follows:

@KafkaClient(id = "order-changes")
public interface OrderChangeClient {

   @Topic("order-changes-avro")
   void sendOrderChange(@KafkaKey Long id, OrderChangeEvent event);
}
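Micronaut generates the implementation of this interface at compile time, and we can inject it like any other bean. Here's a hedged sketch of what the PUT endpoint of the OrdersController (exercised in the test section below) might look like; the exact class differs in the demo project, and binding the JSON request body directly to the Avro-generated OrderChangeEvent depends on the project's Jackson configuration:

import io.micronaut.http.HttpStatus;
import io.micronaut.http.annotation.Body;
import io.micronaut.http.annotation.Controller;
import io.micronaut.http.annotation.Put;

@Controller("/orders")
public class OrdersController {

    private final OrderChangeClient orderChangeClient;

    public OrdersController(OrderChangeClient orderChangeClient) {
        this.orderChangeClient = orderChangeClient;
    }

    // Publish the Avro change event to Kafka and acknowledge with 201 Created;
    // the listener applies the change to the data store asynchronously.
    @Put("/change/{id}")
    public HttpStatus change(Long id, @Body OrderChangeEvent event) {
        orderChangeClient.sendOrderChange(id, event);
        return HttpStatus.CREATED;
    }
}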

Configure a consumer

Similarly, we define the consumer configuration to correspond to the producer configuration for order changes:

kafka:
  # ...
  consumers:
    default:
      group.id: this-micronaut-app-consumer
    order-changes:
      key.deserializer: org.apache.kafka.common.serialization.LongDeserializer
      value.deserializer: io.confluent.kafka.serializers.KafkaAvroDeserializer
      specific.avro.reader: true
      schema.registry.url: ${CC_SCHEMA_REGISTRY_URL}
      basic.auth.credentials.source: USER_INFO
      basic.auth.user.info: "${SCHEMA_REGISTRY_KEY_ID}:${SCHEMA_REGISTRY_KEY_SECRET}"

With this configuration in place, let's update the @KafkaListener-annotated class to use it:

@KafkaListener(groupId = "order-item-changes", value = "order-changes")
public class OrderChangeEventListener {}

As you can see, the annotations provide easy overrides of the configuration. For instance, if we didn't specify a group.id in the @KafkaListener annotation, the fallback would be the value from the application.yml file. The listener body is elided above; see the sketch below for what the handler method might look like.
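Here's a hedged sketch of that handler; the OrderItemRepository and its methods are illustrative, not from the demo project:

import io.micronaut.configuration.kafka.annotation.KafkaKey;
import io.micronaut.configuration.kafka.annotation.KafkaListener;
import io.micronaut.configuration.kafka.annotation.Topic;

@KafkaListener(groupId = "order-item-changes", value = "order-changes")
public class OrderChangeEventListener {

    private final OrderItemRepository orderItemRepository; // hypothetical repository, for illustration

    public OrderChangeEventListener(OrderItemRepository orderItemRepository) {
        this.orderItemRepository = orderItemRepository;
    }

    @Topic("order-changes-avro")
    public void handle(@KafkaKey Long orderId, OrderChangeEvent event) {
        // Apply the change to the underlying data model based on the event type.
        switch (event.getEventType()) {
            case ADD_ITEM -> orderItemRepository.addItem(orderId, event.getItem());
            case UPDATE_ITEM_COUNT -> orderItemRepository.updateQuantity(orderId, event.getItem());
            case DELETE_ITEM -> orderItemRepository.deleteItem(orderId, event.getItem());
        }
    }
}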

Running the application

Theoretically, this is all great. But let's see it in action, with events and a data store. The next few subsections cover the prerequisites for running the application.

Start by cloning the Confluent demo-scene repository from GitHub. The examples in this article are in the micronaut-cc directory.

Provision Confluent Cloud

As helpful as the Confluent Cloud console may be, Terraform is used here to provision Confluent Cloud environments, Kafka clusters, Stream Governance, and access controls.

With the repository cloned, open a terminal and navigate to the micronaut-cc directory, where you'll find a terraform subdirectory. (If this is your first foray into the Confluent CLI, pay particular attention to the environment variable steps in the README file.) Go there and execute the following commands:

export TF_VAR_org_id=$(confluent organization list -o json | jq -c -r '.[] | select(.is_current)' | jq '.id')  # (1)
terraform init                # (2)
terraform plan -out "tfplan"  # (3)
terraform apply "tfplan"      # (4)

The first command (1) is vital to the process: it uses the Confluent CLI, so you must already be authenticated to Confluent Cloud, and it preserves your Confluent Cloud organization id for later Terraform steps to use. Looking at variables.tf, there is an org_id variable that needs to be defined, and Terraform allows such values to be supplied as environment variables prefixed with TF_VAR_. The confluent organization list command returns the organization(s) of which the authenticated user is a member as a JSON document; that output is piped to a series of jq commands to find the id of the current organization. (Note: If your user is a member of multiple organizations, this might not be deterministic, but it works for our demo purposes.)

As the name implies, terraform init (2) initializes the Terraform environment by pulling in the needed providers used by our Terraform code. Next, we create a “plan” for Terraform to execute (3), comparing our Terraform code to the known state of the environment to determine the changes to be made. Finally, the “plan” is applied (4) to the environment.

After the Confluent Cloud environment changes are applied, we can use the terraform output command to extract the environment and credential information to a properties file. This protects us from accidentally committing these sensitive values to our Git repository.

terraform output -json | jq -r 'to_entries[] | .key + "=\"" + (.value.value | tostring) + "\""' | while read -r line ; do echo "$line"; done > ~/tools/micronaut-cc.properties

We can then inject these key-value pairs as environment variables used by our application at runtime.

Start a MySQL instance

This microservice uses MySQL as a data store—just to illustrate the processing of our events by consumer classes. With that in mind, you need to start a MySQL instance using Docker—here’s a command to get you there:

docker run --name micronaut-mysql -e MYSQL_ROOT_PASSWORD="micronaut" -p 3306:3306 -d mysql:8.2.0

Once the database is started, create a schema named micronaut for use in our microservice. This can be done via the command line by opening a bash shell to the running container:

docker exec -it micronaut-mysql bash
bash-4.4# mysql -u root -p
Enter password: 
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 31
Server version: 8.2.0 MySQL Community Server - GPL

Copyright (c) 2000, 2023, Oracle and/or its affiliates.

Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

mysql> show schemas;
+--------------------+
| Database           |
+--------------------+
| information_schema |
| mysql              |
| performance_schema |
| sys                |
+--------------------+
4 rows in set (0.01 sec)

mysql> create schema micronaut;
Query OK, 1 row affected (0.01 sec)

mysql> show schemas;
+--------------------+
| Database           |
+--------------------+
| information_schema |
| micronaut          |
| mysql              |
| performance_schema |
| sys                |
+--------------------+
5 rows in set (0.01 sec)

Going back to the application.yml file, we configure Micronaut to use this database:

datasources:
 default:
   password: "micronaut"
   username: root
   db-type: mysql
   dialect: MYSQL
   url: "jdbc:mysql://localhost:3306/micronaut"
   driver-class-name: "com.mysql.cj.jdbc.Driver"

Also, we enable Micronaut's Flyway integration for the default datasource, and the Hibernate hbm2ddl.auto setting below lets our JPA entity classes drive creation of the appropriate database tables.

flyway:
 enabled: true
 datasources:
   default:
     enabled: true
     schemas: "micronaut"

jpa.default.properties.hibernate.hbm2ddl.auto: update
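For reference, here's a hedged sketch of what a JPA entity in this setup could look like (the field names are illustrative; the demo project's Product entity may differ):

import jakarta.persistence.Entity;
import jakarta.persistence.GeneratedValue;
import jakarta.persistence.GenerationType;
import jakarta.persistence.Id;
import jakarta.persistence.Table;

import java.math.BigDecimal;

// With hibernate.hbm2ddl.auto set to update, Hibernate creates or updates the backing
// table in the `micronaut` schema from this mapping.
@Entity
@Table(name = "products")
public class Product {

    @Id
    @GeneratedValue(strategy = GenerationType.IDENTITY)
    private Long id;

    private String productCode;

    private String name;

    private BigDecimal price;

    // getters and setters omitted for brevity
}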

Starting the microservice

The following examples leverage IntelliJ IDEA, so we start the Micronaut service from an IDE Run Configuration.

The "Environment variables" field of that Run Configuration points to the same file we created with the terraform output command in the previous section. IntelliJ IDEA parses these key-value pairs into environment variables, satisfying the environment variable placeholders used in the application.yml file for values like the bootstrap servers, schema registry URL, and credentials for Confluent Cloud.

Events and data

With the application running, let's exercise the endpoints in the microservice. Once started, the default host and port is localhost:8080—that's the root of our requests going forward. The controllers include a "random" PUT endpoint that generates events and posts them to Kafka. Here's an example of posting a random OrderChangeEvent:

curl --location --request PUT 'http://localhost:8080/orders/random'

After executing this random endpoint several times, we can see the data in the topic from the Confluent Cloud console, and the Data Contracts tab shows the Avro schema of our event data.

Unit testing

Why even bother with a framework if it doesn’t allow developers to create reliable, deterministic unit tests? Micronaut has you covered in this department, with bindings for the most popular test frameworks on the JVM: Spock, JUnit 5, and Kotest (for the Kotlin developers in the house).

Micronaut provides hooks to Testcontainers to test integrations with external systems—databases, messaging systems, and the like. And yes, that includes Apache Kafka. With Testcontainers, your code actually exercises the client libraries and drivers against a containerized instance of that external system. And here's how…

In our example, let’s create a test class for the OrdersController, which has a PUT endpoint to send an OrderChangeEvent to a Kafka topic. We want our test to validate that the event was actually produced. What better way than to consume the events and assert that the event is what we expected the code to produce?

Here’s the beginning of our test class:

@MicronautTest(transactional = false, propertySources = {"classpath:application-test.yml"}, environments = {"test"}) // (1)
@Testcontainers(disabledWithoutDocker = true)  // (2)
@TestInstance(TestInstance.Lifecycle.PER_CLASS)
public class OrderControllerTest implements TestPropertyProvider {  // (3)

   static final Network network = Network.newNetwork();  // (4)

   static final ConfluentKafkaContainer kafka = new ConfluentKafkaContainer(DockerImageName.parse("confluentinc/cp-kafka:7.7.0"))  // (5)
           .withNetwork(network);

   GenericContainer<?> schemaRegistry = new GenericContainer<>(DockerImageName.parse("confluentinc/cp-schema-registry:7.7.0"))  // (6)
           .withNetwork(network)
           .withExposedPorts(8081)
           .withEnv("SCHEMA_REGISTRY_HOST_NAME", "schema-registry")
           .withEnv("SCHEMA_REGISTRY_LISTENERS", "http://0.0.0.0:8081")
           .withEnv("SCHEMA_REGISTRY_KAFKASTORE_BOOTSTRAP_SERVERS",
                   String.format("PLAINTEXT://%s:9093", kafka.getNetworkAliases().get(0)))  // (7)
           .withNetworkAliases("schema-registry")
           .waitingFor(Wait.forHttp("/subjects").forStatusCode(200));

   String schemaRegistryUrl;  // (8)

For starters, our test class is annotated as a @MicronautTest (1), including a reference to a YAML configuration file on the classpath. Next, we use the @Testcontainers (2) annotation to control the lifecycle of any containers in this test instance. We implement the TestPropertyProvider (3) interface to inject the configuration of the containers into the application. (More on that in a bit.)

For our Testcontainers, we create a Network (4) so that the Kafka broker container (5) and Schema Registry container (6) can communicate. Since Confluent Schema Registry uses Kafka for storage, we configure SCHEMA_REGISTRY_KAFKASTORE_BOOTSTRAP_SERVERS with the network address of the Kafka broker container (7). The schemaRegistryUrl (8) value is set later, as the test class initializes, and reused in several places.

Implementing the TestPropertyProvider interface means we have to implement getProperties(), and we do this with values from our running containers. The resulting Map<String, String> is injected at the startup of our Micronaut application:

@Override
public @NonNull Map<String, String> getProperties() {
   if (!kafka.isRunning()) {
       kafka.start();
   }

   if (!schemaRegistry.isRunning()) {
       schemaRegistry.start();
   }

   schemaRegistryUrl = String.format("http://%s:%d", schemaRegistry.getHost(), schemaRegistry.getMappedPort(8081));

   return Map.of(
           "kafka.bootstrap.servers", kafka.getBootstrapServers(),
           "schema.registry.url", schemaRegistryUrl,
           "kafka.consumers.order-changes.specific.avro.reader", "true",
           "kafka.consumers.order-changes.schema.registry.url", schemaRegistryUrl,
           "kafka.producers.order-changes.schema.registry.url", schemaRegistryUrl
   );
}

We use the REST-Assured API to test our controller's methods. Using a data faker, we generate a randomly populated OrderChangeEvent. Since the PUT endpoint accepts JSON data in the body, we must serialize the Avro object as JSON (see the toJson() method below).

@Test
public void testPutChangeEvent(RequestSpecification spec) {

   Faker faker = new Faker();
   Options types = faker.options();

   final long orderId = faker.number().numberBetween(1L, 999999L);
   OrderItem item = OrderItem.newBuilder()
           .setOrderId(orderId)
           .setProductId(UUID.randomUUID().toString())
           .setDescription(faker.lorem().word())
           .setQuantity(1)
           .setUnitPrice(faker.number().randomDouble(2, 9, 100))
           .build();

   OrderChangeEvent event = OrderChangeEvent.newBuilder()
           .setOrderId(orderId)
           .setEventType(types.option(OrderChangeEventType.class))
           .setItem(item)
           .build();

   eventConsumer.subscribe(List.of("order-changes-avro"));

   spec.given()
           .header("Content-Type", "application/json")
           .body(toJson(event))
           .when()
           .put("/orders/change/{id}", orderId)
           .then()
           .statusCode(201);

   await().atMost(Duration.ofSeconds(10)).untilAsserted(() -> { // retry for at most 10 seconds to...
       // poll kafka for 2 seconds...
       ConsumerRecords<Long, OrderChangeEvent> records = eventConsumer.poll(Duration.ofSeconds(2));
       // assert there is a record from the `order-changes-avro` topic with a key == the orderId value we sent.
       assertEquals(1L, StreamSupport.stream(
               records.records("order-changes-avro").spliterator(), false)
               .filter( r -> r.key() == orderId).count());
   });
}

private static <T extends SpecificRecord> String toJson(T avroObject) {
   DatumWriter<T> writer = new SpecificDatumWriter<>(avroObject.getSchema());
   try (ByteArrayOutputStream stream = new ByteArrayOutputStream()) {
       Encoder jsonEncoder = EncoderFactory.get().jsonEncoder(avroObject.getSchema(), stream);
       writer.write(avroObject, jsonEncoder);
       jsonEncoder.flush();
       return stream.toString();
   } catch (IOException e) {
       e.printStackTrace(System.err);
   }
   return null;
}

With a REST-Assured RequestSpecification, we create a request with the appropriate header(s) and JSON-encoded body, then call the PUT endpoint with path-variable substitution. First, we assert there was a 201 HTTP response. Then, using a KafkaConsumer instance, we poll the order-changes-avro topic, filtering the resulting ConsumerRecords until we get an event whose key matches the orderId value sent to the HTTP endpoint. The Awaitility SDK is very useful for these asynchronous test scenarios, allowing us to retry until the assertion passes or a maximum duration elapses.
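The eventConsumer referenced in the test isn't shown in the snippet above. One way it could be constructed against the Testcontainers brokers is sketched below (a hedged example; the group id is arbitrary and the demo project may wire this differently):

import io.confluent.kafka.serializers.KafkaAvroDeserializer;
import io.confluent.kafka.serializers.KafkaAvroDeserializerConfig;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.LongDeserializer;

import java.util.Map;

// Build a Schema Registry-aware consumer pointed at the containers started for the test,
// e.g., once getProperties() has started them and schemaRegistryUrl is set.
Map<String, Object> consumerProps = Map.of(
        ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, kafka.getBootstrapServers(),
        ConsumerConfig.GROUP_ID_CONFIG, "order-controller-test",
        ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest",
        ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, LongDeserializer.class.getName(),
        ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, KafkaAvroDeserializer.class.getName(),
        KafkaAvroDeserializerConfig.SCHEMA_REGISTRY_URL_CONFIG, schemaRegistryUrl,
        KafkaAvroDeserializerConfig.SPECIFIC_AVRO_READER_CONFIG, "true"
);
KafkaConsumer<Long, OrderChangeEvent> eventConsumer = new KafkaConsumer<>(consumerProps);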

Next steps

Event-driven microservices based on Apache Kafka have become an industry-standard pattern. If you're Micronaut-curious, we hope you find these code examples useful when integrating with Kafka.

To take you further on this journey, the examples in this post are available in the micronaut-cc directory of our demo-scene repository. Feel free to leave feedback in GitHub or on social media channels.

Apache®, Apache Kafka®, Kafka®, and Apache Avro™ are trademarks of the Apache Software Foundation.

Micronaut® is a trademark of Micronaut Foundation.

RabbitMQ® is a trademark of Broadcom, Inc.

  • Sandon Jacobs is a Developer Advocate at Confluent, based in Raleigh, NC. Sandon has two decades of experience designing and building applications, primarily with Java and Scala. His data streaming journey began while building data pipelines for real-time bidding on mobile advertising exchanges—and Apache Kafka was the platform to meet that need. Later experiences in television media and the energy sector led his teams to Kafka Streams and Kafka Connect, integrating data from various in-house and vendor sources to build canonical data models.

    Outside of work, Sandon is actively involved in his Indigenous tribal community. He serves on the NC American Indian Heritage Commission, and also as a powwow singer and emcee at many celebrations around North America. Follow Sandon on Twitter @SandonJacobs or Instagram @_sandonjacobs, where he posts about his powwow travels, family, golf, and more.
