When sourcing database events into Apache Kafka, the JDBC source connector is usually the first choice, thanks to its flexibility and the almost-zero setup required on the database side. But sometimes simplicity comes at the cost of accuracy, and missing events can have catastrophic impacts on our data pipelines.
In this session we'll understand how the JDBC source connector works and explore the various modes in which it can operate to load data, either in bulk or incrementally. Having covered the basics, we'll analyse the edge cases that cause things to go wrong, like infrequent polling, out-of-order events, non-incremental sequences, or hard deletes.
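As a concrete reference point, a minimal sketch of a JDBC source connector configuration running in timestamp+incrementing mode might look like the following (the connection URL, table name, and column names are hypothetical placeholders):

```json
{
  "name": "jdbc-orders-source",
  "config": {
    "connector.class": "io.confluent.connect.jdbc.JdbcSourceConnector",
    "connection.url": "jdbc:postgresql://db-host:5432/shop",
    "table.whitelist": "orders",
    "mode": "timestamp+incrementing",
    "incrementing.column.name": "id",
    "timestamp.column.name": "updated_at",
    "poll.interval.ms": "5000",
    "topic.prefix": "jdbc-"
  }
}
```

The `mode` setting is the key choice: `bulk` reloads the whole table on every poll, while the incremental modes rely on an ever-increasing id column, a timestamp column, or both to fetch only new or changed rows since the last poll.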
Finally we'll look at other approaches, like the Debezium source connector, and demonstrate how a bit more configuration on the database side helps avoid these problems and sets up a reliable source of events for our streaming pipeline.
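For comparison, a minimal sketch of a Debezium PostgreSQL source connector configuration is shown below. It reads the database's write-ahead log rather than polling with queries, which is why it can capture hard deletes and out-of-order changes that query-based polling misses; the hostnames, credentials, and table names here are hypothetical placeholders, and the database must have logical decoding enabled (e.g. `wal_level=logical` on PostgreSQL):

```json
{
  "name": "debezium-orders-source",
  "config": {
    "connector.class": "io.debezium.connector.postgresql.PostgresConnector",
    "database.hostname": "db-host",
    "database.port": "5432",
    "database.user": "debezium",
    "database.password": "secret",
    "database.dbname": "shop",
    "plugin.name": "pgoutput",
    "table.include.list": "public.orders",
    "topic.prefix": "shop"
  }
}
```

The database-side setup this requires (logical replication, a replication user) is exactly the "more configuration" trade-off: a little more work up front in exchange for a log-based, loss-free stream of change events.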
Want to reliably take your database events into Apache Kafka? This session is for you!