
Connect with Confluent Q4 Update: New Program Entrants and SAP Datasphere Hydration

Written by Greg Murphy

The Connect with Confluent (CwC) Technology Partner Program consistently expands the reach of Confluent’s data streaming platform across an ever-growing landscape of enterprise data systems. In this blog, you’ll meet the latest program entrants who have built fully managed integrations with Confluent and discover new ways to leverage real-time data across your business.

Confluent unifies Apache Kafka® and Apache Flink® on a single platform, enabling teams to stream, process, and govern all their data with ease. With Flink SQL, you can use familiar SQL syntax to develop preprocessed, high-quality data streams tailored for downstream systems, while minimizing redundant tasks. Paired with the overall CwC portfolio—including a new, native sink integration for SAP Datasphere—this unlocks a new, more efficient approach to delivering richer applications, sharper insights, and seamless data-driven operations.
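
To make that concrete, here is a minimal Flink SQL sketch of the kind of preprocessing described above. The topic and column names (payments_raw, payments_clean, event_time_ms, and so on) are hypothetical placeholders, not part of any real environment:

    -- Hypothetical topics and columns, for illustration only.
    -- Validate and standardize events once, at the source, so every
    -- downstream system reads the same clean, typed stream.
    CREATE TABLE payments_clean AS
    SELECT
      payment_id,
      UPPER(currency) AS currency,
      CAST(amount AS DECIMAL(12, 2)) AS amount,
      TO_TIMESTAMP_LTZ(event_time_ms, 3) AS event_time
    FROM payments_raw
    WHERE payment_id IS NOT NULL
      AND amount > 0;

Because the resulting stream is already validated and typed, downstream connectors and applications can consume it as-is instead of repeating the same cleanup logic.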

Meet the Q4 2024 Connect with Confluent entrants

This quarter, we are excited to welcome five new additions to the CwC program—each bringing direct, fully managed connectivity to Confluent’s data streaming platform. These partners represent a diverse range of technologies and use cases, each providing unique capabilities that increase the power of both Confluent Cloud and Confluent Platform.

New members of the CwC partner program in Q4 include AWS IoT Core, CelerData, Kong, Lightstreamer, and the SAP Datasphere sink integration.

  • AWS IoT Core: Enrich raw data from IoT devices with information from other business systems in real time – Learn more

  • CelerData: Analyze Confluent data streams with a StarRocks-powered SQL engine directly on your data lakehouse – Learn more

  • Kong: Provide seamless, secure access to data streams across the enterprise through a standardized API layer – Learn more

  • Lightstreamer: Deliver Kafka events to remote users across web and mobile apps with ease and reliability – Learn more

  • SAP Datasphere Sink: Hydrate ERP data from SAP S/4HANA and ECC with information from other business systems in real time – Learn more

Alongside these net-new integrations, CwC partners regularly invest in their existing Confluent integrations to improve the development experience, further simplify operations, and open new use cases.

This quarter, both the fully managed and self-managed Elastic Sink Connectors have undergone significant updates in response to what we heard from customers working with Confluent data streams on Elastic's Search AI platform. Fully customizable target data stream naming, support for custom data stream types, and the ability to configure auto-generation of Elasticsearch document IDs for insertion requests are now available for both connectors. Later in this blog, we'll detail how Confluent and Elastic can be paired for production development of RAG-based GenAI search applications.

The Connect with Confluent partner landscape expands

With these updates, we're excited to unveil the latest version of our CwC partner landscape. This updated graphic highlights our ever-expanding ecosystem, now featuring well over 50 integrations, including the newest partners who have joined us this quarter.

Confluent’s CwC partner program now provides well over 50 fully managed integrations with the most popular applications throughout the larger data landscape.

Unlock new use cases with preprocessed, clean data streams

With Confluent and high-performance Flink stream processing, customers are able to shift left—moving data preparation upstream to eliminate wasteful data proliferation, manual break-fix, and high costs by processing and governing data at the source, within milliseconds of its creation. This allows for the development of higher value, real-time data products for reuse by more teams and more use cases across all downstream systems and applications. Whether landing within a database, data lakehouse, data warehouse, analytics platform, or any other application, data streams can be built once and made ready for immediate, multipurpose use across the business.
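
As one hedged illustration of that build-once pattern, the sketch below deduplicates an event stream a single time so every downstream sink reads the same data product. The topic names are hypothetical, and it assumes the $rowtime system column that Confluent Cloud for Apache Flink exposes on each table:

    -- Hypothetical sketch: keep only the first occurrence of each order_id
    -- so one deduplicated stream can be reused across all downstream systems.
    CREATE TABLE orders_deduped AS
    SELECT order_id, customer_id, amount, order_ts
    FROM (
      SELECT *,
             ROW_NUMBER() OVER (
               PARTITION BY order_id
               ORDER BY `$rowtime` ASC
             ) AS row_num
      FROM orders_raw
    )
    WHERE row_num = 1;

The same orders_deduped topic can then feed a warehouse sink, a lakehouse sink, and a search index without any of those systems re-implementing the deduplication logic.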

Read on to learn about some specific ways preprocessed data products can be used across CwC partner applications.

Hydrate SAP Datasphere with real-time data

The ability to harness data from SAP or any other system in real time is paramount for companies seeking to remain competitive. Confluent unlocks data streaming for SAP customers: integrated directly with SAP Datasphere, Confluent Cloud provides a simple means of accessing ERP data within SAP S/4HANA and SAP ECC. This data can then be merged with third-party sources, in real time, to power modern applications, analytics, and AI/ML workloads within any downstream application.

But making the best use of data streams requires a two-way street. Whether it's IoT data from the field, user clickstreams from the web, campaign data from marketing tools, or data from any other application that helps run your business, connecting and working with this data in real time within SAP Analytics Cloud, or your BI tool of choice, introduces a new level of insights and value rooted in your critical business data.

We’re excited to announce that the Confluent integration with SAP Datasphere is now bidirectional, allowing for real-time streaming of data from Confluent back into SAP Datasphere. Easily sourced with Confluent’s portfolio of 120+ pre-built connectors covering the entire data streaming ecosystem, data from across the entire business can be merged with ERP data from SAP S/4HANA and SAP ECC to develop rich, highly contextualized, real-time data products right within SAP.
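
As a rough, hypothetical sketch of what that merge might look like in Flink SQL (all table and column names are invented, and it assumes the campaign data is maintained as a versioned, primary-keyed table suitable for a temporal join):

    -- Hypothetical sketch: enrich SAP-sourced sales orders with marketing
    -- campaign context. The resulting topic is what the SAP Datasphere sink
    -- integration would then deliver back into SAP Datasphere.
    CREATE TABLE sap_orders_with_campaigns AS
    SELECT
      o.sales_order_id,
      o.material,
      o.net_amount,
      c.campaign_id,
      c.campaign_name
    FROM s4hana_sales_orders AS o
    LEFT JOIN marketing_campaigns FOR SYSTEM_TIME AS OF o.`$rowtime` AS c
      ON o.campaign_id = c.campaign_id;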

Learn how to configure the Confluent to SAP Datasphere integration within SAP’s announcement blog, Replication Flows – Confluent as (an SAP) Replication Source.

Confluent now provides a bidirectional integration with SAP Datasphere, allowing for seamless unification of ERP data and any other dataset throughout every corner of a business.

Build a real-time data foundation to fuel Elastic’s vector database

Integral to the GenAI stack and RAG pipeline development, vector databases can store, index, and augment large datasets in the formats that AI technologies like LLMs require. Working with CwC member Elastic, we've developed the newly updated Confluent integrations to fuel Elastic vector search with highly contextualized, AI-ready data streams sourced from anywhere across a customer's business. By leveraging Kafka and Flink as a unified platform with Confluent, teams can clean and enrich data streams on the fly and deliver them as instantly usable inputs, in real time, to vector databases.
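
A minimal, hypothetical sketch of that preparation step in Flink SQL (table and column names are invented) might simply assemble clean document records in flight, with embedding and indexing handled after the Elastic sink delivers them:

    -- Hypothetical sketch: shape AI-ready "document" records as events arrive;
    -- the Elastic sink connector then delivers them for embedding and indexing.
    CREATE TABLE support_docs_for_search AS
    SELECT
      ticket_id,
      account_id,
      CONCAT_WS(' ', subject, body) AS content,
      product_line,
      updated_at
    FROM support_ticket_updates
    WHERE body IS NOT NULL;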

“AI is only as effective as the data powering it. Without real-time, fresh datasets, even the most advanced AI applications will struggle to deliver accurate, relevant insights,” said Paul Mac Farland, SVP of Partner and Innovation Ecosystem, Confluent. “Seamlessly integrated with Elastic, Confluent’s fully managed data streaming platform—with unified Apache Kafka® and Apache Flink®—allows businesses to build the real-time, always up-to-date data foundation that highly contextualized, production-ready search AI applications require.”

To learn more, check out the webinar, How to Build a Real-Time RAG-Enabled AI Chatbot with Flink, Elastic, OpenAI, and LangChain, which shows how to build a real-time generative AI chatbot that leverages retrieval-augmented generation (RAG) for accurate, contextually aware responses. We demonstrate this through a real-world use case: financial services document search and synthesis. Banks spend valuable time researching and summarizing documents; now GenAI makes complex data instantly accessible, helping analysts find and interpret information effectively.

Learn more about Confluent's partnership with Elastic as part of Elastic's AI Ecosystem.

Alongside Elastic, Confluent provides a portfolio of vector database integrations including Couchbase, MongoDB, Neo4j, Pinecone, Qdrant, SingleStore, Weaviate, and Zilliz.

Through CwC integrations, teams can easily access governed, fully managed data streams directly within their vector database of choice, making it even easier to fuel highly sophisticated GenAI applications with trusted, real-time data.

Make “real-time everything and everywhere” a reality 

The Connect with Confluent program accelerates innovation by integrating real-time data streams across your most popular data systems, simplifying Kafka management, and enabling widespread adoption of Confluent Cloud’s powerful capabilities. The program provides:

  • Native Confluent integrations: CwC integrations streamline innovation by integrating Confluent data streams within the world’s most popular data systems, offering fully managed solutions that bypass the complexities of self-managing Apache Kafka. This allows businesses to deliver real-time, low-latency experiences cost-effectively and quickly with Confluent Cloud, recognized as a leader in The Forrester Wave™: Streaming Data Platforms. Confluent Cloud, powered by the Kora Engine, provides elastic scaling, a 99.99% uptime SLA, 120+ connectors, Flink stream processing, stream governance, enterprise-grade security, and global availability, reducing Kafka TCO by up to 60%.

  • New data streaming users: Through these integrations, businesses can easily expand data streaming beyond Kafka experts, fostering organic growth of real-time use cases. Native integrations make it simple for teams to access and utilize high-value data, eliminating the need for cross-functional efforts to manage Kafka or develop new integrations.

  • More real-time data: Each new CwC integration allows customers to instantly share data products across Confluent’s expansive data streaming network, enhancing the value of data within any system by making it accessible to all applications across the business.

Find and configure your next integration

Ready to get started? Check out the full library of Connect with Confluent partner integrations to easily integrate your application with fully managed data streams.

Not seeing what you need? Not to worry. Check out our repository of 120+ pre-built source and sink connectors, including 80+ that are fully managed.

Are you building an application that needs real-time data? Interested in joining the CwC program? Become a Confluent partner and give your customers the absolute best experience for working with data streams—right within your application, supported by the Kafka experts.

  • Greg Murphy is the Staff Product Marketing Manager focused on developing and evangelizing Confluent’s technology partner program. He helps customers better understand how Confluent’s data streaming platform fits within the larger partner ecosystem. Prior to Confluent, Greg held product marketing and product management roles at Salesforce and Google Cloud.
