What if there were a way to create views of your data that update automatically, streamlining operations and reducing manual work? With Confluent’s event streaming platform, you can serve those views from always up-to-date sources, whether they are fed by hundreds or by billions of events per day.
In this online talk series, we’ll show you the modern way of integrating data through streaming extract, transform and load (ETL). Join us for three sessions on how modern streaming ETL delivers event-driven data to the business, and find out how you can make seamless data integration a reality in your organization.
Join Gwen Shapira, Apache Kafka® committer and co-author of "Kafka: The Definitive Guide," as she presents core patterns of modern data engineering and explains how you can use microservices, event streams and a streaming platform like Apache Kafka to build scalable and reliable data pipelines designed to evolve over time.
Gwen Shapira
Principal Data Architect, Confluent
In this talk, we'll build a streaming data pipeline using nothing but our bare hands, the Kafka Connect API and KSQL. We'll stream data in from MySQL, transform it with KSQL and stream it out to Elasticsearch. We'll also cover options for integrating databases with Kafka using change data capture (CDC) and Kafka Connect. A sketch of the KSQL steps follows below.
Robin Moffatt
Developer Advocate, Confluent
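To give a flavor of the pipeline described above, here is a minimal KSQL sketch of the transform step. All names here (the orders_raw stream, the mysql.demo.orders and orders-for-search topics, the columns) are illustrative assumptions, not taken from the talk; the sketch also assumes a CDC source connector is already writing MySQL rows to a Kafka topic as JSON, and that an Elasticsearch sink connector indexes the derived topic.

```sql
-- Register a stream over the topic populated by the MySQL CDC connector
-- (topic and column names are illustrative).
CREATE STREAM orders_raw (
    order_id INT,
    customer_id INT,
    amount DOUBLE,
    status VARCHAR
  ) WITH (
    KAFKA_TOPIC = 'mysql.demo.orders',
    VALUE_FORMAT = 'JSON'
  );

-- Continuously transform the stream, writing results to a new topic
-- that an Elasticsearch sink connector can pick up and index.
CREATE STREAM orders_for_search WITH (KAFKA_TOPIC = 'orders-for-search') AS
  SELECT order_id,
         customer_id,
         amount,
         UCASE(status) AS status
  FROM orders_raw
  WHERE amount > 0;
```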
We’ll discuss how to leverage some of the more advanced transformation capabilities available in both KSQL and Kafka Connect, including how to chain them together into powerful combinations for handling tasks such as data masking, restructuring and aggregation. With KSQL, you can deliver these streaming transformations quickly and easily; a sketch follows below.
Nick Dearden
Director of Engineering, Confluent
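To make the chaining concrete, here is a minimal KSQL sketch under assumed names (a customers topic, a customers_raw stream with an email column, none of which come from the talk): one derived stream masks a sensitive field with KSQL's built-in MASK function, and a continuous aggregation is then chained on top of the masked stream.

```sql
-- Source stream over an existing topic (all names are illustrative).
CREATE STREAM customers_raw (
    customer_id INT,
    email VARCHAR,
    country VARCHAR
  ) WITH (
    KAFKA_TOPIC = 'customers',
    VALUE_FORMAT = 'JSON'
  );

-- Step 1: mask the sensitive field using KSQL's built-in MASK function.
CREATE STREAM customers_masked AS
  SELECT customer_id,
         MASK(email) AS email,
         country
  FROM customers_raw;

-- Step 2: chain a continuous aggregation on top of the masked stream.
CREATE TABLE customers_by_country AS
  SELECT country,
         COUNT(*) AS customer_count
  FROM customers_masked
  GROUP BY country;
```

On the Kafka Connect side, the equivalent chaining is done by listing multiple Single Message Transforms in a connector's transforms property, which applies them in order to each record.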