

Demo: How to use Confluent for streaming data pipelines

In 13 minutes, this demo shows how to use Confluent as a streaming data pipeline between operational databases. We’ll walk through an example of using Confluent to capture change data in real time from a legacy database such as Oracle and stream it into a modern cloud-native database like MongoDB.
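
As a point of reference, in Confluent Cloud the source side of such a pipeline can be created directly from ksqlDB. Below is a minimal, hypothetical sketch assuming the fully managed Oracle CDC Source connector; every connection value is a placeholder, and the exact config keys should be verified against the connector’s documentation.

```sql
-- Hypothetical sketch: capture change events from Oracle tables into
-- Kafka topics. All connection values are placeholders, and config key
-- names may differ by connector version -- check the connector docs.
CREATE SOURCE CONNECTOR oracle_cdc WITH (
  'connector.class'       = 'OracleCdcSource',
  'kafka.api.key'         = '<confluent-api-key>',
  'kafka.api.secret'      = '<confluent-api-secret>',
  'oracle.server'         = '<oracle-host>',
  'oracle.port'           = '1521',
  'oracle.sid'            = '<oracle-sid>',
  'oracle.username'       = '<oracle-user>',
  'oracle.password'       = '<oracle-password>',
  'table.inclusion.regex' = '.*CUSTOMERS',
  'output.data.format'    = 'AVRO'
);
```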

We’ll look at how to:

  1. Stream and merge customer data from an Oracle database with credit card transaction data from RabbitMQ (first sketch below).
  2. Process the merged stream with ksqlDB, using aggregates and windowing to build a list of customers with potentially stolen credit cards (second sketch below).
  3. Load the results into MongoDB Atlas with the fully managed MongoDB Atlas sink connector for further analysis (third sketch below).
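
To make step 1 concrete, here’s a minimal ksqlDB sketch of the merge. The topic, column, and data-format names are illustrative assumptions rather than the demo’s actual values.

```sql
-- Customers arrive as change events from Oracle; model them as a table
-- keyed by customer id. Topic and column names are illustrative.
CREATE TABLE customers (
    customer_id VARCHAR PRIMARY KEY,
    first_name  VARCHAR,
    last_name   VARCHAR,
    email       VARCHAR
) WITH (KAFKA_TOPIC = 'oracle.customers', VALUE_FORMAT = 'AVRO');

-- Credit card transactions arrive from RabbitMQ as an append-only stream.
CREATE STREAM transactions (
    customer_id VARCHAR KEY,
    card_number VARCHAR,
    amount      DOUBLE
) WITH (KAFKA_TOPIC = 'rabbitmq.transactions', VALUE_FORMAT = 'AVRO');

-- Merge: enrich every transaction with the matching customer record.
CREATE STREAM customer_transactions AS
  SELECT t.customer_id AS customer_id,
         c.first_name,
         c.last_name,
         t.card_number,
         t.amount
  FROM transactions t
  JOIN customers c ON t.customer_id = c.customer_id
  EMIT CHANGES;
```

Modeling the Oracle customers as a TABLE rather than a STREAM means each transaction joins against the latest customer record, with no join window needed.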
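
Step 2’s fraud check can then be written as a windowed aggregate over the merged stream. The 2-minute window and the transaction-count threshold below are illustrative assumptions.

```sql
-- Flag cards that ring up suspiciously many transactions in a short
-- window. The tumbling window size and the threshold of 3 are
-- illustrative, not the demo's actual values.
CREATE TABLE possible_stolen_cards AS
  SELECT card_number,
         LATEST_BY_OFFSET(first_name) AS first_name,
         LATEST_BY_OFFSET(last_name)  AS last_name,
         COUNT(*)                     AS txn_count,
         SUM(amount)                  AS total_spent
  FROM customer_transactions
  WINDOW TUMBLING (SIZE 2 MINUTES)
  GROUP BY card_number
  HAVING COUNT(*) > 3
  EMIT CHANGES;
```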
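
And for step 3, the sink can also be created from ksqlDB. This sketch assumes the fully managed MongoDB Atlas sink connector; credentials and database names are placeholders, and the exact config keys should be checked against the connector’s documentation.

```sql
-- Push the flagged-cards changelog topic into MongoDB Atlas.
-- All credential and naming values are placeholders.
CREATE SINK CONNECTOR mongodb_atlas_sink WITH (
  'connector.class'     = 'MongoDbAtlasSink',
  'kafka.api.key'       = '<confluent-api-key>',
  'kafka.api.secret'    = '<confluent-api-secret>',
  'topics'              = 'POSSIBLE_STOLEN_CARDS',
  'input.data.format'   = 'AVRO',
  'connection.host'     = '<atlas-host>',
  'connection.user'     = '<atlas-user>',
  'connection.password' = '<atlas-password>',
  'database'            = 'fraud_detection',
  'collection'          = 'possible_stolen_cards'
);
```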

By the end of this demo, we’ll have covered everything you need to build your first streaming data pipeline.

Helpful resources:

How Confluent Completes Apache Kafka eBook

Confluent Developer Center
