
Presentation

Next Gen Data Modeling in the Open Data Platform

Current 2022

At Yotpo, we have a rich and busy data lake consisting of thousands of data sets ingested and digested by different engines, the main one being Spark. We built our data infrastructure to enable our users to produce and consume data via self-service tooling, giving them the utmost freedom.

This freedom came at a cost.

We struggled with poor standardization, little data reusability, a lack of data lineage, and flaky data sets. We also watched the landscape under which we built our platform change dramatically, as have our analytics needs and expectations.

We came to understand that the modeling layer should be decoupled from the execution layer in order to shed the limitations we were bound by:

- Batch and stream should be no more than attributes within a wider abstraction (see the sketch after this list).
- A Kafka topic and a data lake table are no different and should be treated the same way.
- Observability of our data pipelines should have the same quality and depth across all execution engines, storage methods, and formats.
- Governance should be an implicit part of our ecosystem, serving as a basis for both exploration and automation/anomaly detection.
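To make the first two points concrete, here is a minimal sketch of what such an abstraction could look like: one logical dataset type in which batch vs. stream is just an attribute, and a Kafka topic and a data lake table are addressed through the same interface. All names here (Dataset, Execution, reader) are hypothetical illustrations, not YODA's actual API.

```python
# Hypothetical sketch: one dataset abstraction, where execution mode and
# storage backend are attributes rather than separate types.
from dataclasses import dataclass
from enum import Enum


class Execution(Enum):
    BATCH = "batch"
    STREAM = "stream"


@dataclass(frozen=True)
class Dataset:
    """One logical dataset; storage and execution are attributes, not types."""
    name: str
    storage_uri: str      # e.g. "kafka://orders" or "s3://lake/orders"
    execution: Execution  # how it is materialized, not what it is

    def reader(self) -> str:
        # Dispatch on the URI scheme so callers never branch on storage type.
        scheme = self.storage_uri.split("://", 1)[0]
        if scheme == "kafka":
            return f"read {self.name} from topic ({self.execution.value})"
        return f"read {self.name} from table ({self.execution.value})"


# The same logical dataset, consumed identically whether it is backed by a
# Kafka topic or a data lake table:
orders_stream = Dataset("orders", "kafka://orders", Execution.STREAM)
orders_table = Dataset("orders", "s3://lake/orders", Execution.BATCH)
for ds in (orders_stream, orders_table):
    print(ds.reader())
```

The point of the design is that consumers program against the logical dataset, while the execution engine, storage method, and format stay swappable underneath.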

That's when we started building YODA (soon to be open sourced), which gives us a killer dev experience with the level of abstraction we always dreamed of. Combining DBT, Databricks, lakeFS, and a multitude of streaming engines, we started seeing our vision come to life. In this talk, we'll share our journey of redesigning the data lake and show how to best address organizational needs without giving up on high-end tooling and technology. We are taking this to the next level.
