
White Paper

Streaming Pipelines to Databases - Use Case Implementation

Get the ebook

Data pipelines do much of the heavy lifting in organizations, integrating, transforming, and preparing data for use in downstream systems and operational use cases. Despite being critical to the data value stream, data pipelines have fundamentally not evolved in decades. As real-time streaming becomes essential, these legacy pipelines hold organizations back from getting real value out of their data.

This whitepaper is an in-depth guide to implementing a solution for connecting, processing, and governing data streams between different databases, such as Oracle, MySQL, SQL Server, PostgreSQL, MongoDB, and more. You'll learn about:

  • Sourcing from databases (including the use of Change Data Capture)
  • Sinking to databases
  • Transforming data in flight with stream processing
  • Implementing use cases, including migrating source and target systems, stream enrichment, the outbox pattern, and more
  • Incorporating security and data privacy, data governance, performance and scalability, and monitoring and reliability
  • Getting assistance where needed
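As a taste of the pipeline pattern the whitepaper covers, the sketch below pairs a CDC source with a database sink via the Kafka Connect REST API. It uses a Debezium MySQL source connector and the Confluent JDBC sink connector; all hostnames, credentials, table names, and the Connect endpoint are hypothetical placeholders, and the configs are trimmed for brevity (a real Debezium deployment also needs settings such as schema-history topics).

```shell
# Assumed: a Kafka Connect worker listening on localhost:8083.
# 1. Register a Debezium MySQL source connector that captures
#    row-level changes (CDC) from the hypothetical inventory.orders table.
curl -s -X POST -H "Content-Type: application/json" \
  http://localhost:8083/connectors -d '{
  "name": "orders-source",
  "config": {
    "connector.class": "io.debezium.connector.mysql.MySqlConnector",
    "database.hostname": "mysql",
    "database.port": "3306",
    "database.user": "cdc_user",
    "database.password": "cdc_password",
    "database.server.id": "184054",
    "topic.prefix": "inventory",
    "table.include.list": "inventory.orders"
  }
}'

# 2. Register a JDBC sink connector that upserts the resulting
#    change stream into a hypothetical PostgreSQL database.
curl -s -X POST -H "Content-Type: application/json" \
  http://localhost:8083/connectors -d '{
  "name": "orders-sink",
  "config": {
    "connector.class": "io.confluent.connect.jdbc.JdbcSinkConnector",
    "connection.url": "jdbc:postgresql://postgres:5432/analytics",
    "topics": "inventory.orders",
    "insert.mode": "upsert",
    "pk.mode": "record_key",
    "auto.create": "true"
  }
}'
```

Between the source and the sink, stream processing (for example, with ksqlDB or Kafka Streams) can transform or enrich the change events in flight, which is where the use cases listed above come in.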

Download the whitepaper today to get started with building streaming data pipelines.

Additional Resources


Confluent Cloud Demo

Join us for a live demo of Confluent Cloud, the industry's only fully managed, cloud-native event streaming platform powered by Apache Kafka.

Kafka Microservices

In this online talk series, learn key concepts, use cases, and best practices to harness the power of real-time streams for microservices architectures.

e-book: Microservices Customer Stories

See how five organizations across a wide range of industries leveraged Confluent to build a new class of event-driven microservices.