
Presentation

Analyzing Petabyte Scale Financial Data with Apache Pinot and Apache Kafka

« Kafka Summit Europe 2021

At Stripe, we operate a general ledger modeled as double-entry bookkeeping for all financial transactions. Warehousing such data is challenging due to its high volume and the high cardinality of unique accounts. Furthermore, it is financially critical to get up-to-date, accurate analytics over all records. Because real-time transactions are constantly changing, it is impossible to pre-compute the analytics as a fixed time series. We overcame this challenge by building a real-time key-value store inside Pinot that sustains half a million QPS across all financial transactions.
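To make the double-entry model concrete, here is a minimal sketch of the invariant it enforces: every transaction is recorded as a set of debit and credit entries that sum to zero, so the ledger as a whole always balances. The names and schema below are illustrative assumptions, not Stripe's actual data model.

```python
from dataclasses import dataclass

@dataclass
class Entry:
    # Hypothetical ledger entry: positive amounts are debits,
    # negative amounts are credits, in integer cents.
    account: str
    amount_cents: int

def record_transaction(ledger, entries):
    # Double-entry invariant: a transaction is valid only if its
    # debits and credits balance exactly.
    if sum(e.amount_cents for e in entries) != 0:
        raise ValueError("transaction does not balance")
    ledger.extend(entries)

ledger = []
record_transaction(ledger, [
    Entry("customer_cash", 1000),      # debit: cash received
    Entry("merchant_payable", -1000),  # credit: owed to merchant
])

# The balancing property holds over the whole ledger at all times.
assert sum(e.amount_cents for e in ledger) == 0
```

Because every write preserves this zero-sum property, analytics such as per-account balances can be derived at any point by aggregating entries, rather than trusting a separately maintained total.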

We will talk about the details of our solution and the interesting technical challenges faced.
