Most of Airbnb's development happens in several large monolithic repositories. With thousands of changes merged every day, there is a high probability that changes which pass verification in isolation will fail to integrate with other changes on the mainline.
Evergreen is a system that guarantees serializability of changes. It enforces that every mainline commit passes all automated checks, such as compilation, static analysis, and tests, as if the changes had been merged sequentially without overlapping in time. This is made possible by using optimistic concurrency control and the build system's topology to parallelize the verification of merging changes.
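The optimistic approach can be illustrated with a minimal sketch: each queued change is verified against the predicted mainline that includes all changes ahead of it, and a failing change is ejected while the rest are re-verified. The function name, the list-of-strings representation of changes, and the `verify` callback are all assumptions for illustration, not Evergreen's actual interface.

```python
# Hypothetical sketch of optimistic speculative merging. Changes are modeled
# as opaque strings and `verify` stands in for the full battery of checks
# (compilation, static analysis, tests) run against a predicted mainline.
def speculative_merge(mainline, queue, verify):
    """Verify queued changes against predicted states; on a failure, eject
    the failing change and re-verify the remaining ones."""
    merged = list(mainline)
    pending = list(queue)
    while pending:
        # Predicted state for change i = mainline + all changes ahead of it.
        # In the real system these verifications run in parallel.
        results = [verify(merged + pending[: i + 1]) for i in range(len(pending))]
        committed = 0
        for ok in results:
            if not ok:
                break
            committed += 1
        # Commit the longest passing prefix to the mainline.
        merged.extend(pending[:committed])
        if committed < len(pending):
            # Eject the first failing change; later changes re-enter the
            # queue, since their predicted states are now stale.
            pending = pending[committed + 1:]
        else:
            pending = []
    return merged
```

With this scheme, a failing change only invalidates the speculative work of the changes queued behind it, rather than forcing the whole queue to re-verify from scratch.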
Evergreen uses an actor model built on top of Kafka Streams. At its core is a state machine that applies a pure function to events, transforming them into actions that are executed by workers, which in turn may produce additional events. We found this architecture particularly well suited to Kafka Streams, with its exactly-once processing, load-balancing capabilities, and minimal dependencies. However, using Kafka Streams was not without its challenges: one-record-at-a-time processing quickly became a bottleneck, and replaying records to debug the service was not easy.
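The core idea of a pure transition function can be sketched as follows. The event kinds, action kinds, and state shape here are invented for illustration; Evergreen's actual protocol is not described in this abstract.

```python
from dataclasses import dataclass

# Hypothetical event and action types for a change-verification state machine.
@dataclass(frozen=True)
class Event:
    kind: str        # e.g. "change_submitted", "check_passed", "check_failed"
    change_id: str

@dataclass(frozen=True)
class Action:
    kind: str        # e.g. "run_checks", "merge", "reject"
    change_id: str

def transition(state: dict, event: Event) -> tuple[dict, list[Action]]:
    """Pure function: (state, event) -> (new state, actions for workers).
    No I/O happens here; side effects live entirely in the workers, which
    makes the state machine deterministic and replayable."""
    new_state = dict(state)
    if event.kind == "change_submitted":
        new_state[event.change_id] = "verifying"
        return new_state, [Action("run_checks", event.change_id)]
    if event.kind == "check_passed":
        new_state[event.change_id] = "merged"
        return new_state, [Action("merge", event.change_id)]
    if event.kind == "check_failed":
        new_state[event.change_id] = "rejected"
        return new_state, [Action("reject", event.change_id)]
    return new_state, []
```

Keeping the transition pure is what lets Kafka Streams' exactly-once processing carry the correctness guarantee: replaying the same event log always reproduces the same state and actions.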
In this talk, we will explore Evergreen's architecture and share what we learned from running Kafka Streams in a mission-critical system.