
New in Confluent Cloud: Tableflow, Freight Clusters, Apache Flink® AI Enhancements, and More

Written by

Our Q1 Confluent Cloud launch comes to you from Current Bengaluru, where data streaming industry experts have gathered to showcase how real-time streaming with Apache Kafka®, Apache Flink®, and Apache Iceberg™️ is enabling generative artificial intelligence (AI) use cases and helping their organizations take innovation to the next level. Today marks a significant milestone as we announce the availability of several major products, including Tableflow and Freight clusters, that are designed to make feeding your data lake easier and high-throughput streaming cheaper. We're also excited to introduce AI enhancements to Confluent Cloud for Apache Flink® as well as our developer-friendly Visual Studio Code extension and our Oracle XStream CDC Premium Connector, plus additions to our Enterprise clusters that will bring you even more security and cost savings.

In case you missed them, check out the latest updates from Confluent Platform 7.9 for more information on how we're enhancing the on-premises experience. Keep reading for a full breakdown to get the most out of our new cloud features.

Join us on April 2 for the Q1 Launch webinar and demo to see these new features in action.

Tableflow Is Now Generally Available!

We’re excited to announce the general availability of Tableflow. As stated in our vision, Tableflow represents Kafka topics and associated schemas as Apache Iceberg™️ (generally available) or Delta Lake (Early Access) tables to feed any data lake, warehouse, or analytics engine.

With Tableflow, users can:

  • Simplify the process of representing Kafka topics as Iceberg or Delta tables to form bronze and silver tables with automated data maintenance and strong read performance

  • Store fresh, up-to-date Iceberg or Delta tables once and reuse them many times with their own compatible storage, ensuring flexibility, cost savings, and security

  • Leverage our commercial and ecosystem partners to transform bronze or silver tables into gold standard tables for a wide range of AI and analytics use cases


Tableflow removes error-prone, duplicative work and helps convert Kafka topics and associated schemas to Iceberg or Delta Lake tables in a few clicks. It also integrates with Iceberg and Delta Lake catalogs such as AWS Glue, Databricks Unity (coming soon), and Snowflake Open Catalog while offering a built-in Iceberg REST Catalog. These integrations streamline access for analytical engines such as Athena, EMR, Redshift, SageMaker, Databricks, and Snowflake. Tableflow’s Iceberg and Delta tables are also accessible by leading data lake and warehouse solutions such as Dremio, Imply, Onehouse, and Starburst, as well as open source software (OSS)-compatible technologies such as Apache Spark™️, Flink, and Trino.
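
To make the read side concrete, here is a minimal sketch of querying a Tableflow-exposed topic as an Iceberg table through the built-in Iceberg REST Catalog using PyIceberg. The catalog URI, credentials, and table name are placeholders rather than values from this announcement; take the real ones from your cluster’s Tableflow settings.

```python
# A minimal read-side sketch, assuming a Tableflow-enabled topic exposed via
# the built-in Iceberg REST Catalog. The URI, credentials, and table name
# below are placeholders -- copy the real values from your cluster's
# Tableflow settings in Confluent Cloud.
from pyiceberg.catalog import load_catalog

catalog = load_catalog(
    "tableflow",
    **{
        "type": "rest",
        "uri": "https://<tableflow-rest-endpoint>",  # placeholder endpoint
        "credential": "<api-key>:<api-secret>",      # placeholder credential
    },
)

# The Kafka topic shows up as an ordinary Iceberg table, so any
# Iceberg-compatible engine (Spark, Trino, Athena, ...) sees the same data.
table = catalog.load_table("<namespace>.orders")     # placeholder table name
print(table.scan(limit=10).to_arrow())
```

The same table metadata is what engines such as Spark or Trino would consume, which is the "store once, reuse many times" pattern described above.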

For a deep dive into the latest features and what’s coming next, see our Tableflow blog.

To try Tableflow with Delta Lake tables and help shape our product roadmap, sign up for Early Access.

Trading Some Latency for Cost Savings Using Freight Clusters

Confluent Cloud now offers cost-effective cluster types tailored to any streaming use case. For some workloads, where low latency isn't a strict requirement, organizations can achieve substantial cost savings by opting for a solution such as Freight clusters—a more economical option that doesn’t compromise overall performance and reliability.

We're excited to announce that Freight clusters are now generally available on Amazon Web Services (AWS), providing up to 90% in cost savings. By leveraging our Kora engine and its next-generation “direct write” capabilities, Freight clusters bypass expensive replication to multiple availability zones and local storage. Instead, they write directly to object storage, such as Amazon S3, delivering significant savings on networking costs. Coupled with Freight’s autoscaling capability, this means users don’t have to over-provision resources or waste capacity, which leads to additional cost savings.

With Freight clusters, you can:

  • Optimize high-throughput, relaxed-latency workloads like logging, telemetry, and feeding batch analytics pipelines

  • Lower total cost of ownership by up to 90% by removing the need for replication to multiple availability zones

  • Scale automatically to meet your needs with elastic Confluent Units for Kafka (eCKUs), ensuring that you pay only for what you use

Freight is a cost-effective, fully managed option for use cases that don’t require sub-millisecond latency, such as logging, monitoring, and similar workloads. While Enterprise clusters are designed for high-performance, low-latency scenarios, Freight allows companies to reduce costs by trading some latency for savings. It complements our other cluster types, offering organizations the flexibility to mix and match based on their specific use case and workload requirements and helping to optimize both performance and budget.
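
To make the latency-for-throughput trade concrete, here is a producer sketch using the confluent-kafka Python client against a Freight cluster. Freight speaks the standard Kafka protocol, so the usual client works unchanged; only the tuning differs. The bootstrap server, credentials, and topic name are placeholders.

```python
# A throughput-tuned producer sketch for a Freight cluster. The settings below
# simply favor large, compressed batches over per-record latency, which suits
# logging and telemetry workloads. Endpoint and credentials are placeholders.
from confluent_kafka import Producer

producer = Producer({
    "bootstrap.servers": "<freight-bootstrap-endpoint>:9092",  # placeholder
    "security.protocol": "SASL_SSL",
    "sasl.mechanisms": "PLAIN",
    "sasl.username": "<api-key>",                              # placeholder
    "sasl.password": "<api-secret>",                           # placeholder
    # Relaxed-latency tuning: wait longer and batch more per request.
    "linger.ms": 200,
    "batch.size": 1_000_000,
    "compression.type": "lz4",
})

for i in range(10_000):
    producer.produce("telemetry", value=f"event-{i}".encode())
producer.flush()
```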

Making Enterprise Clusters Safer and Bigger

BYOK on AWS

We are excited to extend Bring Your Own Key (BYOK) support to Enterprise clusters on AWS, making it possible for organizations to leverage the scalability, elasticity, and private networking features of this serverless cluster type while maintaining their security and compliance posture. This is on top of existing support for self-managed keys on dedicated clusters on AWS, Azure, and Google Cloud.

With BYOK support for Enterprise clusters, you can:

  • Meet critical security and compliance requirements with an additional layer of data protection often required by highly regulated industries such as healthcare and finance

  • Ensure granular access control without compromising performance, leveraging both robust encryption and the scalability, elasticity, and private networking capabilities of Enterprise clusters

  • Revoke access to encryption keys to prevent access to data at rest in case of a security breach
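
As a rough sketch of the customer-owned side of BYOK on AWS, the snippet below creates a symmetric KMS key with boto3 and prints the ARN you would register during Enterprise cluster creation. The key policy that grants Confluent access is omitted here; copy the exact policy from the BYOK workflow in the Confluent Cloud UI.

```python
# A sketch of creating a customer-managed AWS KMS key for BYOK. The region is
# an example, and the required key policy (granting Confluent's service
# principal access) is intentionally omitted -- take it from the BYOK workflow.
import boto3

kms = boto3.client("kms", region_name="us-east-1")

key = kms.create_key(
    Description="BYOK key for a Confluent Cloud Enterprise cluster",
    KeyUsage="ENCRYPT_DECRYPT",
    KeySpec="SYMMETRIC_DEFAULT",
)
key_arn = key["KeyMetadata"]["Arn"]
print(f"Register this key ARN with Confluent Cloud: {key_arn}")

# Because you own the key, disabling it (or revoking Confluent's grant) cuts
# off access to data at rest -- the revocation scenario described above.
# kms.disable_key(KeyId=key_arn)
```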

32 eCKU Maximum Capacity (Limited Availability)

Currently, Enterprise clusters support up to 10 eCKUs per cluster. Each eCKU represents a collection of capacity across multiple dimensions to support your workloads, including throughput, partitions, and client connections. Now we’re taking Enterprise clusters to the next level to support any workload.

For workloads that require greater than 10 eCKUs (1.2 GB/s of combined throughput), Confluent is increasing the maximum capacity of Enterprise clusters, offering clusters that scale up to 32 eCKUs (more than 7.5 GB/s of combined throughput and 96,000 partitions), more than three times the current maximum capacity per cluster. With this increased capacity, organizations will be able to handle peaks in demand without compromising performance while also having the flexibility to scale back down to zero when demand decreases, ensuring cost efficiency.

To try out the new 32 eCKU maximum capacity, sign up for the Limited Availability program today.

Apache Flink® AI Enhancements

Flink Native Inference (Early Access)

Building on our previously released Flink AI Model Inference, which enabled you to remotely call proprietary models such as those from OpenAI directly within Flink SQL, we’re excited to announce the launch of Flink Native Inference. Native Inference further simplifies AI integration by adding support for open source models (e.g., the Meta Llama series) and fine-tuned models as a fully managed service. Beyond expanding the types of models you can use as first-class resources within Flink, Native Inference also ensures data security and reduces your costs. Your sensitive data stays within Confluent Cloud during inference and isn’t shared with third-party model providers, which also eliminates cloud ingress and egress costs. Graphics processing unit (GPU) resources are shared and managed by Confluent, ensuring cost-efficient processing.
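
For a feel of what this looks like, here is an illustrative sketch of the Flink SQL involved, held in Python strings for submission from a SQL workspace, the Confluent CLI, or the statements API. The model name, the 'confluent' provider value, and the WITH option keys are assumptions for illustration; the Early Access docs define the supported syntax. ML_PREDICT follows the pattern introduced with Flink AI Model Inference.

```python
# Illustrative Flink SQL for Native Inference, kept as strings. The provider
# value and option keys below are assumed for illustration, not final syntax.
create_model = """
CREATE MODEL llama_summarizer
INPUT (prompt STRING)
OUTPUT (response STRING)
WITH (
  'provider' = 'confluent',  -- assumed key for a natively hosted model
  'task' = 'text_generation'
);
"""

invoke_model = """
SELECT ticket_id, response
FROM support_tickets,
     LATERAL TABLE(ML_PREDICT('llama_summarizer', ticket_text));
"""
```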

Flink Search (Early Access)

As part of real-time retrieval-augmented generation (RAG) architectures, data needs to be prepared with additional context prior to prompting a large language model (LLM). Flink Search provides a streamlined way to query across various databases within Flink SQL, bypassing intricate ETL processes or manual consolidation. For Early Access, we’re supporting vector searches from vector databases such as MongoDB, Elasticsearch, and Pinecone. This improves data contextualization for AI applications and ensures data readiness with timely, contextually enriched information. By leveraging both Flink Search and Native Inference, you can seamlessly orchestrate RAG, prompting your models with the most up-to-date data to boost the accuracy and relevance of responses for better decision-making.
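
Below is a sketch of what a vector lookup from Flink SQL might look like when enriching a stream before prompting a model. The VECTOR_SEARCH table function name and its argument order are hypothetical stand-ins based on the Early Access description, not confirmed syntax; check the docs for the actual function and connection DDL.

```python
# Hypothetical Flink SQL for stream enrichment via vector search, kept as a
# string. Function name and argument order are assumptions for illustration.
rag_enrichment = """
SELECT q.question, d.chunk
FROM questions AS q,
     LATERAL TABLE(
       VECTOR_SEARCH(
         docs_index,            -- external vector index (e.g., in MongoDB)
         3,                     -- top-k nearest matches
         q.question_embedding   -- embedding column carried on the stream
       )
     ) AS d;
"""
```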

Built-In ML Functions (Early Access)

We’re excited to introduce two new, pre-built time-series forecasting and anomaly detection SQL functions for streaming data, enabling data engineers and software developers to derive real-time insights with machine learning (ML). Built-in ML functions transform the way teams interact with advanced ML models by simplifying complex data science tasks into Flink SQL, providing a familiar yet powerful way to apply AI to streaming data. These functions enable real-time analysis, reduce complexity, and speed up decision-making by delivering insights immediately as the data is ingested. Proactively predicting trends and detecting anomalies is crucial for a variety of use cases, including operational monitoring, financial forecasting, Internet of Things analytics, and retail analytics.
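
Here is a sketch of what applying the two built-in functions to a telemetry stream could look like. The announcement does not name the functions, so ML_FORECAST and ML_DETECT_ANOMALIES, along with the argument order and windowing shown, are illustrative assumptions; consult the Early Access docs for the real signatures.

```python
# Illustrative Flink SQL for the built-in forecasting and anomaly detection
# functions, kept as strings. Names and signatures are assumptions.
forecast = """
SELECT event_time,
       ML_FORECAST(metric_value, event_time)
         OVER (ORDER BY event_time
               RANGE BETWEEN INTERVAL '1' HOUR PRECEDING AND CURRENT ROW)
FROM telemetry;
"""

anomalies = """
SELECT event_time,
       ML_DETECT_ANOMALIES(metric_value, event_time)
         OVER (ORDER BY event_time
               RANGE BETWEEN INTERVAL '1' HOUR PRECEDING AND CURRENT ROW)
FROM telemetry;
"""
```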

To try out these Flink AI features, sign up for the Early Access program today.

Confluent for VS Code

After working with many customers and early adopters to make this the ultimate tool for Kafka developers, we’re thrilled to announce the general availability of Confluent for VS Code. This extension for VS Code works with your Confluent environments to comprehensively support Kafka streaming workflows and tools. With this extension, developers can connect to any Kafka cluster to develop, manage, debug, and monitor real-time data streams without switching between multiple tools. Confluent for VS Code also integrates with Confluent Schema Registry to further simplify schema management and ensure consistency across all your data streams.

“Our platform team has found the ability to browse messages via Confluent Cloud for VS Code to be incredibly beneficial, not only for us but also for other product teams involved in producing or consuming messages. The extension addressed an immediate need for our team by enabling us to view and search through large volumes of events.”

— Bill Overton, Product Owner at Eaton

By integrating Confluent with VS Code, Kafka developers can:

  • Accelerate project setup and ensure consistency across all development efforts with ready-to-use templates

  • Easily create and edit Kafka topics with built-in schema association, making it simple to produce Kafka messages to topics and iterate on existing schemas

  • Code and debug in one place with visibility into Confluent resources to easily search, filter, and compare Kafka messages in real time

Accelerate development cycles and simplify real-time data processing—all without leaving your integrated development environment.

Whether you're building event-driven applications or optimizing data pipelines, this extension makes real-time development faster, smoother, and more intuitive.

Install the extension from the VS Code Marketplace to get started. Want to learn more? Check out the announcement blog and tune into Streaming Frontiers for a livestream demo.

Oracle XStream CDC Premium Connector

Today, we're excited to announce our fully managed Oracle XStream CDC Premium Connector, with general availability coming soon. This connector delivers enterprise-grade performance, scalability, and reliability for real-time, event-driven streaming architectures. With this connector, organizations can cost-effectively stream high-value operational data from Oracle databases to modern platforms, enabling real-time analytics, event-driven applications, and automated workflows.

Unlike traditional change data capture (CDC) solutions that suffer from throughput limitations and performance degradation during high-volume transactions, the XStream-based connector is purpose-built for high-throughput, low-latency streaming, ensuring minimal impact on source databases at scale and lowering total cost of ownership.

With the Oracle XStream CDC Premium Connector, organizations can:

  • Achieve high-performance, reliable streaming of change events, delivering a two to three times improvement in throughput and latency

  • Reduce total cost of ownership with simplified licensing and lower operational overhead

  • Unlock real-time operational data to create reusable data products for downstream systems

Organizations can leverage XStream technology without requiring separate Oracle GoldenGate licensing while gaining access to enterprise features out of the box, including state management, performance monitoring, and native support for Oracle Real Application Clusters (RAC). This improves cost-effectiveness, eliminates operational burden, and accelerates time to value. You can also leverage our other 80+ fully managed connectors to build streaming data pipelines that send fresh Oracle data to modern, downstream data systems such as Snowflake, MongoDB, and Google BigQuery.
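
As a rough sketch of what a fully managed connector configuration might look like, the snippet below writes a config file for use with `confluent connect cluster create --config-file oracle-xstream.json`. The connector class name and several property keys are assumptions for illustration; the connector’s documentation is the source of truth.

```python
# Sketch of a fully managed Oracle XStream CDC connector config. The class
# name and the xstream.* / database.* keys are assumed for illustration.
import json

config = {
    "name": "oracle-xstream-cdc",
    "connector.class": "OracleXStreamSource",  # assumed plugin name
    "database.hostname": "<oracle-host>",       # placeholder
    "database.port": "1521",
    "database.user": "<capture-user>",          # placeholder
    "database.password": "<secret>",            # placeholder
    "database.service.name": "<service-name>",  # placeholder
    "xstream.outbound.server.name": "<xout>",   # assumed key: XStream outbound server
    "kafka.auth.mode": "KAFKA_API_KEY",
    "tasks.max": "1",
}

with open("oracle-xstream.json", "w") as f:
    json.dump(config, f, indent=2)
```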

For a deeper dive into the connector, read more here.

Additional New Features and Updates

Cluster Linking Now Supports Enterprise Clusters and Azure Private Networking

We’re excited to extend Cluster Linking even further, with support for Enterprise clusters on AWS and Azure (coming soon) and Dedicated clusters with private networking on Azure.

By extending Cluster Linking to Enterprise clusters, you can simplify migration to Confluent Cloud from a wide range of deployments, including self-managed Kafka clusters and Confluent Platform. This makes it significantly easier to adopt Enterprise clusters and benefit from the resulting elasticity and cost savings. Meanwhile, Azure private networking for Dedicated clusters enables you to create secure and private links between Azure-hosted Dedicated clusters in different regions to address your data-sharing needs while meeting critical security requirements.
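
For a migration like the one described above, creating a link might look like the sketch below, which drives the Confluent CLI from Python. The link name, cluster ID, broker address, and properties file are placeholders, and flag spellings should be verified against `confluent kafka link create --help`.

```python
# Sketch of creating a cluster link into an Enterprise cluster via the
# Confluent CLI. All identifiers are placeholders; check CLI flags in --help.
import subprocess

subprocess.run(
    [
        "confluent", "kafka", "link", "create", "from-onprem",
        "--cluster", "<enterprise-cluster-id>",               # destination
        "--source-bootstrap-server", "<onprem-broker>:9092",  # source cluster
        "--config", "link.properties",                        # source auth settings
    ],
    check=True,
)
```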

Confluent Cloud for Government (Early Access) Is FedRAMP Ready

Confluent Cloud for Government (Early Access) is now FedRAMP Ready, marking a significant milestone toward full FedRAMP authorization. FedRAMP Ready status signifies Confluent’s continued commitment to rigorous security standards put forth by the federal government. With Confluent Cloud for Government, federal agencies can not only ensure compliance with stringent security standards but also leverage the scalability and reliability of the cloud as well as the zero maintenance and operations inherent in a fully managed offering. Sign up for the Early Access program today.

ServiceNow Source V2 Connector

We’re excited to announce the general availability of an enhanced ServiceNow Source connector to align with evolving enterprise needs. Our fully managed ServiceNow Source V2 connector delivers support for up to five ServiceNow tables, secure OAuth Client Credentials Grant flow authentication, and advanced querying capabilities.

Flink Private Networking for Azure

In last quarter’s Confluent Cloud launch, we introduced private networking for Flink on Azure, initially rolling it out across several key regions. We are now excited to announce that private networking support for Flink is available in all 12 Azure regions where Flink is available, extending this powerful capability to a global scale. This enhancement broadens security and compliance options for Azure-based Flink deployments, ensuring consistent and protected stream processing operations.

New Flink SQL Workspace Features

This release introduces several user interface enhancements to Flink SQL Workspaces, designed to improve productivity and streamline workflows.

First, instant SQL validation makes it possible for you to quickly detect errors in Flink SQL statements before submission, reducing debugging cycles.

Time-series visualization enables developers to seamlessly explore time-series data in chart mode with customizable views.

And finally, tabbed workspaces streamline workflows by organizing related SQL statements into tabs that retain their state across visits.

To learn more about new features, check out the Confluent Developer newsletter.

Apache Kafka 4.0 Is Here!

We’re excited to announce the release of Apache Kafka 4.0, a major milestone packed with improvements and features, including the completed transition from ZooKeeper to KRaft, faster and simpler consumer rebalances, and more. Read more in the blog.

Partner Program Updates

Confluent Cloud Now Supports Jio Cloud

We’re thrilled to announce the general availability of Confluent Cloud in the Jio India West region, marking the exciting beginning of our multiyear strategic partnership with Jio Platforms Limited. With Confluent Cloud on Jio Cloud, we’re providing Indian businesses with the tools they need to unlock real-time insights, improve operational efficiency, and deliver superior customer experiences. This offering is available immediately to all Jio Cloud customers. Visit the Jio Cloud documentation to get started.

Confluent Extends Its Partnership With Databricks

We’re excited to announce an expanded partnership between Confluent and Databricks to ensure that organizations can unlock the benefits of real-time AI. This partnership strengthens our vision to unify operational and analytical estates with real-time, trustworthy data. Learn more in the Databricks announcement blog.

New CwC Integrations to Fuel Real-Time AI and Analytics

Our Connect with Confluent (CwC) program provides a portfolio of more than 50 fully managed integrations with Confluent, helping customers discover new ways to leverage real-time data across their businesses. This quarter, CwC introduced three new integrations—Amazon Redshift, SAS, and Vectara—which further broaden the reach and influence of Confluent’s data streaming ecosystem. We also have two updated integrations—HiveMQ and Onibex—simplifying access to data streams and the development of real-time applications.

Meet the Q1 2025 Connect with Confluent entrants

Check out the CwC Q1 announcement blog to learn more about these integrations, how they can fuel your AI and analytics applications with real-time data, and our current partner landscape.

Start Building With New Confluent Cloud Features

If you’re new to Confluent, sign up for a free trial of Confluent Cloud and create your first cluster to explore the new features. New signups receive $400 to spend within Confluent Cloud during their first 30 days. Use the code CCBLOG60 for an additional $60 of free usage.*


The preceding outlines our general product direction and is not a commitment to deliver any material, code, or functionality. The development, release, timing, and pricing of any features or functionality described may change. Customers should make their purchase decisions based on services, features, and functions that are currently available.

Confluent and associated marks are trademarks or registered trademarks of Confluent, Inc.

Apache®, Apache Kafka®, Apache Flink®, and Apache Iceberg™️ are either registered trademarks or trademarks of the Apache Software Foundation in the United States and/or other countries. No endorsement by the Apache Software Foundation is implied by using these marks. All other trademarks are the property of their respective owners.

Oracle® and XStream are either registered or unregistered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

  • Yashwanth Dasari is a Sr. Product Marketing Manager at Confluent responsible for positioning, messaging, and GTM strategy for Confluent Cloud, Confluent Platform, WarpStream, and Tableflow. Prior to joining Confluent, Yashwanth was a Management Consultant at BCG, advising F500 clients in the technology and financial sectors. He also worked as a Software Engineer at Optum and SAP Labs.
