
The Business Value of the DSP: Part 1 – From Apache Kafka® to a DSP


In our 2024 Data Streaming Report, we surveyed 4,110 IT leaders about the pivotal role that data streaming plays in their businesses. Of the respondents, 91% indicated that data streaming platforms (DSPs) are critical or important for achieving their data-related objectives—a result that comes as no surprise. Most IT teams using Confluent’s DSP—from architecture and integration teams to application and data engineers—get it. They know the technical benefits of data streaming inside and out.

That said, IT teams often need to convince non-technical, business stakeholders of the value of a DSP. They may need to build a compelling business case and financial justification for new or additional investments. That’s where the conversation can become challenging.

This two-part series will help bridge that communication gap. Part 1 will trace Confluent's evolution from its origins with Apache Kafka® to today's comprehensive DSP. Part 2 will focus on helping IT teams articulate the business value of a DSP to key stakeholders, outlining how the platform drives measurable outcomes that matter, including:

  • Driving revenue

  • Reducing costs

  • Mitigating risks

A Quick History of Confluent and the Evolution to a DSP

The Confluent DSP has been recognized as the market-leading platform by multiple independent analysts. To understand what the Confluent DSP is, what it does, and how it stands apart, let’s reflect on Confluent’s journey.

The roots of Confluent’s DSP trace back to LinkedIn in 2010, when a team including Jay Kreps, Jun Rao, and Neha Narkhede envisioned a system that could meet LinkedIn's real-time data infrastructure needs. They created Kafka and open sourced it through the Apache Software Foundation in early 2011, then went on to found Confluent in 2014. Confluent’s decade-long journey can broadly be categorized into three distinct “acts,” each representing a significant step forward in delivering value for our customers and across the data streaming landscape.


Act 1: Confluent Platform – 2014 to 2017

In our early days, we created Confluent Platform as a better way to self-manage open source Apache Kafka. The focus was on supporting customers with additional product features such as security, monitoring, and replication. Our value proposition was simple but powerful: providing expert support alongside a “buy versus build” option that reduced the complexity of setting up and managing Kafka.

During Act 1, Forrester completed a total economic impact (TEI) assessment of the value-added benefits of Confluent, distinct from the underlying value of Kafka. The study suggested that the value of Confluent Platform for a typical customer in 2018 was as follows:

  • Reduced developer and management cost of $2.4M

  • Accelerated business enablement of $3.8M

  • An overall return on investment (ROI) of more than 200%

Act 2: Confluent Cloud and Bring Your Own Cloud – 2017 to 2024

As customer needs grew, so did we. In response to increasing demand for better performance, greater elasticity, pay-as-you-go pricing, and the simplicity and scalability of a Software-as-a-Service–style offering, we launched Confluent Cloud—the easiest, fastest, and most reliable way to run Kafka in any public cloud. As our products evolved, we quickly became the most popular and most cost-effective solution for managing Kafka. Our value proposition centered on letting Confluent manage the complexity of setting up and operating Kafka, resulting in:

  • Reduced development and operational effort

  • Elimination of infrastructure hassles

  • Mitigation of risks

During Act 2, Forrester completed a TEI report for Confluent Cloud, finding that a typical customer saves approximately $2.5M and realizes an overall ROI of 257%. On average, customers observed:

  • Development and operations cost savings of more than $1.4M

  • Scalability and infrastructure cost savings of more than $1.1M

Alongside Confluent Platform and Confluent Cloud, we also added a Bring Your Own Cloud option to our product portfolio to serve customers who wanted a cloud-native streaming offering in their own cloud accounts. The overall result from Acts 1 and 2? A drastically lower total cost of ownership (TCO) for Kafka and the flexibility to run in any combination of on-premises, hybrid cloud, and multicloud deployments.

For those wanting to understand our Acts 1 and 2 value proposition further, I recommend this blog series.

Fast forward to today: We’ve spent more than a decade listening to our customers and understanding their growth requirements, which brings us to Act 3.

Act 3: DSP – 2024 and Beyond

Just as we pioneered and set the standard for what streaming should be with our industry-leading Kafka offering, we’ve now earned the right to define what a DSP truly is. Organizations need more than Kafka alone. They need a holistic platform that solves many of their data challenges, addressing the need to process and govern streaming data. That’s why we’ve transformed into a true platform company, defining the DSP.

We believe a DSP must encompass the following core components:

  1. Stream. Streaming is the heart of the DSP. Kafka has become the industry standard, and we’ve taken it further with the Kora engine—a fully managed, cloud-native service that delivers unparalleled performance, elasticity, resilience, and security.

  2. Connect. The DSP must integrate seamlessly into the existing data systems landscape. While Kafka Connect gets you part of the way there, Confluent offers 120+ prebuilt connectors with enterprise-grade security, reliability, and support.

  3. Process. A single event or data point has some innate value on its own, but its true potential emerges when it no longer exists in isolation. An important step toward building reusable data is to enrich that event on the fly with other data and event streams, making it instantly consumable across other use cases. That’s why it’s important to process (filter, join, and enrich) data and build applications on it while it’s in motion (see the sketch after this list). Confluent has invested in Apache Flink®, the market-leading stream processing engine. We can also enable streamlined, bidirectional data flow between Confluent’s Tableflow—which converts Kafka logs into Delta Lake tables—and Databricks’ Unity Catalog. This integration unlocks real-time, governed data products from any source to power intelligent applications.

  4. Govern. A DSP also requires robust governance capabilities to manage the quality and lineage of data streams while enabling teams to share those streams faster without bypassing data quality or compliance controls. Confluent has built the industry's first governance solution for data streams, which allows users to understand ownership and access trusted data through self-service capabilities. With features such as Stream Quality, Stream Catalog, Stream Lineage, and Data Portal, Confluent’s Stream Governance suite fosters the collaboration and knowledge-sharing necessary to become a data-centric business while remaining compliant with ever-evolving data regulations.
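To make the “Process” step concrete, here is a minimal sketch of in-flight enrichment using Flink SQL from Java. Everything in it is illustrative rather than prescriptive: the orders and customers topics, their fields, the broker address, and the JSON formats are hypothetical, and it assumes the Flink Kafka and JSON connector dependencies are on the classpath.

```java
import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.TableEnvironment;

public class OrderEnrichmentSketch {
    public static void main(String[] args) {
        // Streaming-mode Table API environment.
        TableEnvironment tEnv =
                TableEnvironment.create(EnvironmentSettings.inStreamingMode());

        // Append-only stream of raw order events (hypothetical topic and schema).
        tEnv.executeSql(
                "CREATE TABLE orders (order_id STRING, customer_id STRING, amount DOUBLE) "
              + "WITH ('connector' = 'kafka', 'topic' = 'orders', "
              + "'properties.bootstrap.servers' = 'broker:9092', "
              + "'format' = 'json', 'scan.startup.mode' = 'earliest-offset')");

        // Customer profiles, read as a continuously updating table.
        tEnv.executeSql(
                "CREATE TABLE customers (customer_id STRING, region STRING, tier STRING, "
              + "PRIMARY KEY (customer_id) NOT ENFORCED) "
              + "WITH ('connector' = 'upsert-kafka', 'topic' = 'customers', "
              + "'properties.bootstrap.servers' = 'broker:9092', "
              + "'key.format' = 'json', 'value.format' = 'json')");

        // The enriched stream, written back to Kafka for downstream consumers.
        tEnv.executeSql(
                "CREATE TABLE orders_enriched (order_id STRING, region STRING, "
              + "tier STRING, amount DOUBLE, PRIMARY KEY (order_id) NOT ENFORCED) "
              + "WITH ('connector' = 'upsert-kafka', 'topic' = 'orders-enriched', "
              + "'properties.bootstrap.servers' = 'broker:9092', "
              + "'key.format' = 'json', 'value.format' = 'json')");

        // Continuous query: filter, join, and enrich while the data is in motion.
        tEnv.executeSql(
                "INSERT INTO orders_enriched "
              + "SELECT o.order_id, c.region, c.tier, o.amount "
              + "FROM orders AS o "
              + "JOIN customers AS c ON o.customer_id = c.customer_id "
              + "WHERE o.amount > 0");
    }
}
```

The point of the pattern is that the join runs continuously: every new order is enriched the moment it arrives, and the result is immediately available to any downstream consumer as a governed, reusable stream.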

In summary, our definition of the DSP is “a platform that connects, streams, processes, and governs data in motion.” The DSP also offers Delta Lake-first integration between Confluent and data lakehouses—such as Databricks—that will enable businesses to bridge the divide between real-time applications and their analytics and artificial intelligence (AI) platforms. This will be crucial as business intelligence transforms into AI.

Looking Ahead: From Technical Evolution to Business Transformation

As we trace Confluent's journey from self-managed Kafka to today's comprehensive DSP, we see a clear progression from solving specific technical challenges to enabling enterprise-wide data transformation. The early days focused on making Kafka more manageable, followed by making it more accessible through cloud offerings. When assessing the value of Confluent, we separated Confluent’s added value from the underlying value of Kafka. Now, as we enter Act 3, the DSP represents something far more fundamental—a complete reimagining of how enterprises can manage, process, and derive value from streaming data.

Confluent and the DSP are so much more than Kafka, Flink, or a “better way of managing an open source product.” The DSP offers a fundamentally new way of managing data across the modern enterprise. Data streaming has moved far beyond its technical origins to become a cornerstone of modern business transformation, and the potential value of Confluent is now measured in the tens or hundreds of millions of dollars.

This evolution mirrors the growing strategic importance of setting data in motion in the enterprise. And IT leaders are on board, with 86% of organizations citing the DSP as a strategic or important priority for IT investments. In Part 2 of this series, we'll explore how to translate this technical evolution into business value, providing frameworks and approaches to help technical teams communicate the strategic importance of DSP investments to business stakeholders. We'll examine how DSPs help drive revenue growth, reduce operational costs, and mitigate risks across the enterprise—helping bridge the gap between technical capabilities and business outcomes.


Apache®, Apache Kafka®, Kafka®, Apache Flink®, and Flink® are registered trademarks of the Apache Software Foundation. No endorsement by the Apache Software Foundation is implied by the use of these marks.

Lyndon joined Confluent in 2017 and is a Director of Strategic Customer Advisory. He helps organizations model how Confluent and the data streaming platform can improve company performance. Prior to joining Confluent, Lyndon built the Business Value practice at Acquia, a digital experience platform company (2014–2017). He was on the founding team of a UK startup with an enterprise infrastructure platform and analytics solution (2009–2014). Before that, Lyndon spent 12 years with Accenture’s IT Strategy practice (1996–2008). Lyndon holds a Master of Science in neuroscience, including neural networks and artificial intelligence, from Oxford University. He runs his own blog on all things digital and data: https://lyndon.london.
