A question people often ask about Apache Kafka® is whether it is okay to use it for longer-term storage. Kafka, as you might know, stores a log of records, something like this:
The question is whether you can treat this log like a file and use it as the source-of-truth store for your data.
Obviously this is possible: if you just set the retention to “forever” or enable log compaction on a topic, data will be kept for all time. But I think the question people are really asking is less whether this will work, and more whether it is something that is totally insane to do.
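As a concrete sketch, here is roughly what those two retention options look like when creating topics with Kafka’s Java AdminClient. The broker address, topic names, and partition/replica counts here are illustrative assumptions, not anything prescribed:

```java
import java.util.List;
import java.util.Map;
import java.util.Properties;

import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.NewTopic;
import org.apache.kafka.common.config.TopicConfig;

public class LongTermTopics {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");

        try (AdminClient admin = AdminClient.create(props)) {
            // Option 1: keep every record forever by disabling time-based retention.
            NewTopic keepEverything = new NewTopic("orders", 6, (short) 3)
                .configs(Map.of(TopicConfig.RETENTION_MS_CONFIG, "-1"));

            // Option 2: log compaction keeps the latest record per key indefinitely.
            NewTopic latestPerKey = new NewTopic("customer-profiles", 6, (short) 3)
                .configs(Map.of(TopicConfig.CLEANUP_POLICY_CONFIG,
                                TopicConfig.CLEANUP_POLICY_COMPACT));

            admin.createTopics(List.of(keepEverything, latestPerKey)).all().get();
        }
    }
}
```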
The short answer is that it’s not insane; people do this all the time, and Kafka was actually designed for this type of usage. But first, why might you want to do this? There are actually a number of use cases. Here are a few:
So is it crazy to do this? The answer is no, there’s nothing crazy about storing data in Kafka: it works well for this because it was designed to do it. Data in Kafka is persisted to disk, checksummed, and replicated for fault tolerance. Accumulating more stored data doesn’t make it slower. There are Kafka clusters running in production with over a petabyte of stored data.
So why does storing data in Kafka cause people so much concern given that it is so obviously built like a storage system?
I think the natural concerns people have come from Kafka often being described as a message queue. The first two rules of message queues are “you do not store messages in the message queue.” There are a bunch of reasons this doesn’t work in a traditional messaging system:
These are actually big weaknesses in the design of traditional messaging systems. If you think about it, any asynchronous messaging is fundamentally about storing messages, even if only for a few seconds until they are consumed. After all, if my service sends a message to some queue and proceeds with its business, yet I want some guarantee that your service will eventually receive and process that message, then something somewhere must store it until your service can do so. So if your messaging system is not good at storing messages, it isn’t good at “queuing” messages either.

You might think this doesn’t matter because you don’t plan on storing messages for very long. But no matter how briefly messages are stored, if the messaging system is under continuous load there will always be some unconsumed messages sitting in it. So when it does fail, if it has no capability of providing fault-tolerant storage, it is going to lose some data. Doing messaging right requires getting storage right. This point seems obvious but is often missed when people evaluate messaging systems.
So storage is important for messaging use cases. But in fact, Kafka isn’t really a message queue in the traditional sense at all; in implementation it looks nothing like RabbitMQ or other similar technologies. It is much closer in architecture to a distributed filesystem or database than to a traditional message queue. There are three primary differences between Kafka and traditional messaging systems:
We think of these differences as being enough to make it pretty inaccurate to think of Kafka as a message queue, and instead categorize it as a Streaming Platform.
One way to think about the relationship between messaging systems, storage systems, and Kafka is the following. Messaging systems are all about propagating future messages: when you connect to one you are waiting for new messages to arrive. Storage systems such as a filesystem or a database are all about storing past writes: when you query or read from them you get results based on the updates you did in the past. The essence of stream processing is being able to combine these two: being able to process from the past, and to continue processing into the future as the future occurs. This is why Kafka’s core abstraction is a continuous time-ordered log. The key to this abstraction is that it is a kind of structured “file” that doesn’t end when you come to the last byte but rather, at least logically, goes on forever. A program written to work with a log therefore doesn’t need to differentiate between data that already occurred and data that is going to happen in the future; it all appears as one continuous stream. This combination of storing the past and propagating the future behind a single uniform protocol and API is exactly what makes Kafka work well for stream processing.
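A minimal sketch of what this looks like with the Java consumer API: a single poll loop that first replays everything already in the log, then simply keeps returning new records as they arrive, with no change in the code between “past” and “future.” The topic, group, and broker address are illustrative assumptions:

```java
import java.time.Duration;
import java.util.List;
import java.util.Properties;

import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

public class PastAndFuture {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "replay-example");
        // With no committed offset, start from the first retained record.
        props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(List.of("orders")); // assumed topic
            while (true) {
                // The same poll loop reads historical records first, then keeps
                // returning new ones as producers append them: the program never
                // has to distinguish past data from future data.
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(1));
                for (ConsumerRecord<String, String> r : records) {
                    System.out.printf("offset=%d key=%s value=%s%n", r.offset(), r.key(), r.value());
                }
            }
        }
    }
}
```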
This stored log is very much like a file in a distributed filesystem: it is replicated across machines, persisted to disk, and supports high-throughput linear reads and writes. But it is also like a messaging system: it allows many, many high-throughput concurrent writes, and it has a very precise definition of when a message is published, which allows cheap, low-latency propagation of changes to lots of consumers. In this sense it is the best of both worlds.
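For the write side, here is a hedged sketch of that precise publication point: with acks=all, the producer’s send is not acknowledged until the record has been written and replicated, and that acknowledged write is what becomes visible to consumers. The topic name, key, and broker address are assumptions for illustration:

```java
import java.util.Properties;

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.clients.producer.RecordMetadata;
import org.apache.kafka.common.serialization.StringSerializer;

public class DurablePublish {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(ProducerConfig.ACKS_CONFIG, "all");                // wait for full replication
        props.put(ProducerConfig.ENABLE_IDEMPOTENCE_CONFIG, "true"); // retries won't duplicate
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // Blocking on the future makes the publication point explicit: when
            // get() returns, the record has been replicated and durably stored.
            RecordMetadata meta = producer
                .send(new ProducerRecord<>("orders", "order-42", "{\"amount\": 10}"))
                .get();
            System.out.printf("stored at partition=%d offset=%d%n", meta.partition(), meta.offset());
        }
    }
}
```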
In implementation, this replicated log is perfectly well suited for use as storage, and that isn’t an accident. In fact, Kafka uses itself as storage, so you can’t avoid it! Internally Kafka stores the offsets that track consumers’ positions in a compacted Kafka topic, and Kafka’s Streams API uses compacted topics as the journal for your application’s processing state. Both of these use cases require permanent storage of the data that is written.
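To make the offsets example concrete, here is a rough sketch using the plain Java consumer. Each explicit commit is itself a write to Kafka: the group’s position lands in the internal compacted __consumer_offsets topic. The group and topic names are illustrative assumptions:

```java
import java.time.Duration;
import java.util.List;
import java.util.Properties;

import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

public class OffsetsAreStoredInKafka {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "billing-service");  // assumed group name
        props.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, "false");  // commit explicitly
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(List.of("orders")); // assumed topic
            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(1));
                // ... process records ...
                if (!records.isEmpty()) {
                    // This commit writes the group's position to the internal
                    // compacted __consumer_offsets topic; compaction retains the
                    // latest committed offset per key indefinitely.
                    consumer.commitSync();
                }
            }
        }
    }
}
```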
Treating Kafka as a primary store does mean raising the bar for how you run it, though. Storage systems rightly have a huge burden in terms of correctness, uptime, and data integrity. I was involved in building and running several generations of distributed databases at LinkedIn, and when a system is treated as the canonical source for data, the bar for both software correctness and operational practices rises quite dramatically. We’ve put a lot of effort into correctness for Kafka (we run hundreds of machine hours of distributed torture tests on it every day, in addition to thousands of normal unit tests), but there is honestly always more to do. Beyond testing, if you’re running Kafka for this kind of use case, you need to make sure you know how to operate it well, and you need to know the limitations the system has.
This is something where Confluent can help: we offer support and tools for running, managing, and monitoring Kafka yourself, or a hosted service so you don’t have to.
When I talk about this with other people they sometimes ask if this means Kafka could become a kind of universal database, obsoleting all other storage thingies (obviously I talk to a lot of Kafka fans). The answer is probably not, for two reasons:
First, databases are mostly about queries, and I don’t think Kafka really benefits from trying to add any kind of random access lookups directly against the log. Rather, what it is doing is storing a log of the data that can be replicated into any number of databases, caches, stream processors, search thingies, graph stores, and data lakes (not to mention custom applications or SaaS products). Each of these systems has its own pros and cons, and I think it’s hubris to think you can do better than all of them in a single system. These stores typically need to be very, very closely tied to their own storage layout (e.g., analytical databases have very sophisticated columnar indexing techniques, search indexes keep inverted indexes, caches are in memory, LSMs are good at writes, btrees are good at reads, etc.).
If Kafka isn’t going to be a universal format for queries, what is it? I think it makes more sense to think of your datacenter as a giant database: in that database, Kafka is the commit log, and these various storage systems are kinds of derived indexes or views. It’s true that a log like Kafka can be a fantastic primitive for building a database, but the queries are still served out of indexes that were built for the appropriate access pattern.
One of the best examples of this is the interactive queries feature in Kafka’s Streams API. A Kafka Streams application is just another Kafka consumer, but it can maintain computed state that it materializes off the stream. This state can be queried directly or piped out into external systems. This doesn’t involve querying Kafka directly at all; rather, it lets a stream processing application keep a derived, queryable, materialized view of the data in Kafka and run low-latency queries against it. The Kafka cluster stores the log, and the stream processing API stores the materialized view and processes queries against it. You can think of this as a kind of refactoring of the dividing line between the application and the database. This becomes even more interesting as we add KSQL to the picture, a streaming SQL engine for Kafka. With KSQL, you no longer need to write any code but can use SQL statements to continuously transform and compute materialized views off of Kafka (and soon to run queries against those views).
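Here is a minimal sketch of that pattern with the Streams API: the application materializes a count per key into a local store (backed by a compacted changelog topic in Kafka) and then serves point lookups from that same store. The application ID, topic, store name, and key are illustrative assumptions:

```java
import java.util.Properties;

import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StoreQueryParameters;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.kstream.Materialized;
import org.apache.kafka.streams.state.QueryableStoreTypes;
import org.apache.kafka.streams.state.ReadOnlyKeyValueStore;

public class QueryableView {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "page-view-counts");
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass());
        props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass());

        StreamsBuilder builder = new StreamsBuilder();
        KStream<String, String> views = builder.stream("page-views"); // assumed topic
        // Materialize a count per key; the store's changelog is a compacted
        // Kafka topic, so the view can always be rebuilt from the log.
        views.groupByKey().count(Materialized.as("view-counts"));

        KafkaStreams streams = new KafkaStreams(builder.build(), props);
        streams.start();
        Thread.sleep(5_000); // crude wait for RUNNING; a real app would check streams.state()

        // Serve low-latency point lookups directly from the local materialized view.
        ReadOnlyKeyValueStore<String, Long> store = streams.store(
            StoreQueryParameters.fromNameAndType("view-counts", QueryableStoreTypes.keyValueStore()));
        System.out.println("views for home-page: " + store.get("home-page"));
    }
}
```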
But perhaps another reason Kafka isn’t adding query APIs is that it has a more exciting mission. Our goal with Kafka is to make streams of data and stream processing a mainstream development paradigm, and to make this type of streaming platform something that acts as a central nervous system for a modern digital business. I think this is actually much more interesting than building the 1001st database. I think this type of streaming platform is going to be essential to how data moves and is processed in a modern enterprise, and to how modern real-time applications are built. There is a ton of work to do to make this a reality, so we’re focusing on that first.