
New in Confluent Cloud: Extending Private Networking Across the Data Streaming Platform

Written by
  • Hannah Miao, Senior Product Marketing Manager, Confluent

As we step into the new year, it’s the perfect time to reflect on the exciting advancements Confluent made in 2024. Our Q4 launch wrapped up the year with a host of powerful features paving the way for even more innovation in 2025. This launch is all about delivering private networking and enhanced security across the data streaming platform. From private networking for our governance and Apache Flink® products to a BYOC-native Schema Registry, we’re ensuring that Confluent’s cloud offerings can meet the evolving needs of every organization.

This launch introduces enhancements across the four key pillars of a data streaming platform—stream, connect, process, and govern—to help customers unlock new possibilities. And in case you missed it, be sure to check out the latest updates from Confluent Platform 7.8 for more on how we're enhancing the on-prem experience. Keep reading for a full breakdown to get the most out of our new cloud features.

Securing everything in the data streaming platform

You needed private networking across everything—whether it’s for connecting your data, maintaining schemas, or securing your Confluent Cloud for Apache Flink® applications—and we heard you loud and clear. That’s why we’ve been working hard to deliver solutions to fit your needs. And the best part? We're just getting started, with even more additions coming your way in the months ahead. Stay tuned!

mTLS authentication for Dedicated clusters

As part of our efforts to enhance security, we’re proud to announce the general availability of mTLS authentication for Dedicated clusters, perfect for users who have built their authentication protocols around mTLS and certificates. mTLS provides two-way authentication between clients and Dedicated clusters before any data is exchanged, securing data in transit. With mTLS authentication, you can now define granular access control using RBAC and ACLs to manage client permissions for Confluent clusters based on client certificate metadata.

Learn more about the steps required to configure mTLS authentication on Confluent Cloud in the docs.
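The docs cover the full setup, but at a high level the client side looks like standard Kafka TLS configuration plus a keystore holding the client certificate. Here’s a minimal sketch for a Java producer; the bootstrap endpoint, file paths, and passwords are placeholders, and the property names are standard Apache Kafka client settings rather than anything specific to Confluent Cloud.

```java
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class MtlsProducerSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        // Placeholder bootstrap endpoint for a Dedicated cluster
        props.put("bootstrap.servers", "pkc-xxxxx.us-east-1.aws.confluent.cloud:9092");
        // SSL enables TLS; the keystore below supplies the client certificate for mTLS
        props.put("security.protocol", "SSL");
        props.put("ssl.keystore.type", "PKCS12");
        props.put("ssl.keystore.location", "/etc/kafka/client.keystore.p12");
        props.put("ssl.keystore.password", "changeit");
        props.put("ssl.key.password", "changeit");
        // Truststore containing the CA that signed the broker certificates
        props.put("ssl.truststore.location", "/etc/kafka/client.truststore.jks");
        props.put("ssl.truststore.password", "changeit");
        props.put("key.serializer", StringSerializer.class.getName());
        props.put("value.serializer", StringSerializer.class.getName());

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            producer.send(new ProducerRecord<>("orders", "order-1", "{\"amount\": 42}"));
        }
    }
}
```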

AWS PrivateLink for Schema Registry (limited availability)

Security-sensitive organizations may require private networking for their cloud resources, which is why we’re excited to announce the limited availability of AWS PrivateLink for Schema Registry for both Enterprise and Dedicated clusters. This capability enables private networking with Schema Registry using a private endpoint, so users can keep all traffic between clients and Schema Registry within their VPC.

AWS PrivateLink for Schema Registry is now available for production usage. Get started by signing up!
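For clients, switching to the private endpoint is mostly a configuration change: the serializer’s `schema.registry.url` points at the PrivateLink endpoint instead of the public one. A minimal sketch in Java follows; the endpoint URL and credentials are placeholders, and the property names are the standard Confluent Avro serializer settings.

```java
import java.util.Properties;
import io.confluent.kafka.serializers.KafkaAvroSerializer;
import org.apache.kafka.common.serialization.StringSerializer;

public class PrivateSchemaRegistryProps {
    public static Properties producerProps() {
        Properties props = new Properties();
        props.put("bootstrap.servers", "pkc-xxxxx.us-east-1.aws.confluent.cloud:9092"); // placeholder
        props.put("key.serializer", StringSerializer.class.getName());
        props.put("value.serializer", KafkaAvroSerializer.class.getName());
        // Private endpoint resolved inside the VPC (placeholder URL)
        props.put("schema.registry.url", "https://psrc-xxxxx.us-east-1.aws.private.confluent.cloud");
        // Schema Registry API key and secret (placeholders)
        props.put("basic.auth.credentials.source", "USER_INFO");
        props.put("basic.auth.user.info", "<SR_API_KEY>:<SR_API_SECRET>");
        return props;
    }
}
```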

Egress Private Link for connectors on Enterprise clusters

In 2024, we introduced Egress Private Link for Dedicated clusters. This capability makes it possible for customers using AWS PrivateLink or Azure Private Link to leverage fully managed connectors to securely connect to critical data systems within their private networks. Today, we're excited to expand Egress Private Link support to Enterprise clusters on both AWS and Azure, helping users meet their secure networking needs while scaling clusters elastically and cost-effectively.

Private networking on Azure for Confluent Cloud for Apache Flink (limited availability)

In 2024, we announced that Confluent Cloud for Apache Flink supports private networking on AWS for both Enterprise and Dedicated clusters. Building on that foundation, we're thrilled to extend private networking support to Flink on Azure for both Enterprise and Dedicated clusters. This enhancement strengthens security across Azure deployments and makes it easier for organizations to comply with regulations such as GDPR and CCPA.

Check out the list of currently supported cloud regions in the docs—and stay tuned as we extend Flink private networking support to additional regions in Azure in the upcoming weeks.

User-defined functions (UDFs) on AWS

When it comes to stream processing, the built-in operators in Flink SQL cover the basics but often fall short for complex data transformations like custom aggregations and time-based joins. For teams that need highly customizable and scalable solutions to meet their stream processing needs, we’re thrilled to introduce user-defined functions (UDFs) in Confluent Cloud for Apache Flink, giving developers the ability to extend Flink SQL with custom logic.

We're taking things a step further with two exciting updates. First, Flink UDFs are now generally available in Java on AWS, making it easy for developers to incorporate custom logic into Flink SQL in their production environments. Second, we’re adding table functions to Flink UDFs to allow for more complex transformations, such as splitting a string by a delimiter or generating multiple rows from a single string.

These enhancements make Flink SQL far more powerful, enabling developers to handle more sophisticated stream processing scenarios. But beyond SQL, Confluent Cloud for Apache Flink as a whole becomes even more developer-friendly. By allowing developers to write custom logic in their preferred programming language, UDFs lower the barrier to entry for Flink and reduce development time.

UDFs allow developers to use familiar programming languages and existing code, promoting efficiency and accuracy in data processing.
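To make the shape of a UDF concrete, here’s a minimal sketch using the standard Flink Table API function classes that Java UDFs build on. The class and function names are illustrative; in Confluent Cloud these would be packaged into a JAR and registered per the docs before being called from Flink SQL.

```java
import org.apache.flink.table.annotation.DataTypeHint;
import org.apache.flink.table.annotation.FunctionHint;
import org.apache.flink.table.functions.ScalarFunction;
import org.apache.flink.table.functions.TableFunction;
import org.apache.flink.types.Row;

// Scalar function: returns exactly one value per input row.
public class ToCelsius extends ScalarFunction {
    public Double eval(Double fahrenheit) {
        return fahrenheit == null ? null : (fahrenheit - 32) * 5.0 / 9.0;
    }
}

// Table function: emits zero or more rows per input row, e.g. splitting
// a delimited string into one output row per token.
@FunctionHint(output = @DataTypeHint("ROW<word STRING>"))
class SplitByDelimiter extends TableFunction<Row> {
    public void eval(String input, String delimiter) {
        if (input == null || delimiter == null) {
            return;
        }
        for (String token : input.split(java.util.regex.Pattern.quote(delimiter))) {
            collect(Row.of(token));
        }
    }
}
```

Once registered, a table function like this is invoked with a lateral join, e.g. `SELECT word FROM lines, LATERAL TABLE(split_by_delimiter(line, ','))` (the registered name here is illustrative).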

Flink UDFs, including both scalar and table functions, are fully integrated into SQL Workspaces, Confluent Cloud’s browser-based interface. Stay tuned as we expand UDF support to Python and additional cloud platforms.

WarpStream updates

WarpStream Orbit

Migrating and replicating to WarpStream just got a whole lot easier. Until now, users needed to implement custom solutions or utilize extra tools and integrations like MirrorMaker to preserve offsets and ensure data consistency when migrating to WarpStream. Enter WarpStream Orbit—a fully managed, offset-preserving replication tool that makes migrating to WarpStream from any Kafka-compatible source, like self-hosted Kafka or other cloud providers, a breeze.

Orbit automatically preserves offsets and offset gaps, consumer groups, ACLs, and cluster configurations. Similar to how Cluster Linking simplifies the replication and migration of data to Confluent Cloud clusters, Orbit simplifies the migration process to WarpStream, ensuring smooth, consistent data replication without complex workarounds.

Orbit writes records with the same offsets as the source.

With Orbit, accessible directly from the WarpStream Console or via infrastructure-as-code (Terraform), users can:

  • Simplify migration to WarpStream with a fully managed solution, eliminating the need for additional infrastructure or tooling 

  • Set up disaster recovery to minimize downtime by instantly switching over to read-replica clusters when facing hardware failures or networking issues

  • Cut costs by up to 24x by moving data into tiered storage and creating read replicas that scale to massive read throughput while maintaining consistent performance

No more manual workarounds or complex setups—just seamless migration and replication. Get the full scoop in the deep dive blog, and see a demo of Orbit via the Orbit product page.

BYOC Schema Registry for WarpStream

A schema registry plays a crucial role in data streaming by allowing teams to set clear, universal data standards that ensure data quality and compatibility. As WarpStream continues to expand its governance features, we’re excited to introduce WarpStream BYOC Schema Registry, making it possible for users to manage schemas directly from within their cloud account.

BYOC Schema Registry leverages WarpStream’s unique data plane and control plane split, ensuring that schemas never leave a customer’s cloud account. This is ideal for organizations with strict security and compliance requirements, as all schemas are stored in the customer’s environment and object storage buckets. Meanwhile, WarpStream handles scaling the schema registry metadata, including concurrency control and versioning.

Schemas are stored in object storage while the metadata is offloaded to the control plane.

BYOC Schema Registry currently supports Avro, with support for additional formats coming soon. Get all the details in the blog.
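As a hedged illustration, assuming a deployment exposes the standard Schema Registry-compatible API, registering an Avro schema with the stock Java client looks like the sketch below; the endpoint URL and subject name are placeholders.

```java
import io.confluent.kafka.schemaregistry.avro.AvroSchema;
import io.confluent.kafka.schemaregistry.client.CachedSchemaRegistryClient;
import io.confluent.kafka.schemaregistry.client.SchemaRegistryClient;

public class RegisterAvroSchema {
    public static void main(String[] args) throws Exception {
        // Placeholder endpoint for a BYOC Schema Registry deployment
        SchemaRegistryClient client =
            new CachedSchemaRegistryClient("https://schema-registry.internal.example.com", 100);

        String avroSchema = "{\"type\":\"record\",\"name\":\"Order\","
            + "\"fields\":[{\"name\":\"id\",\"type\":\"string\"},"
            + "{\"name\":\"amount\",\"type\":\"double\"}]}";

        // Register the schema under a subject; the registry returns a schema ID
        int id = client.register("orders-value", new AvroSchema(avroSchema));
        System.out.println("Registered schema id: " + id);
    }
}
```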

Save costs with follower fetching on AWS

As part of our ongoing commitment to helping users optimize costs, we’re pleased to announce that follower fetching is now generally available for organizations using AWS VPC peering. With follower fetching, users can configure their clients to consume from the closest replica in the same AZ rather than a leader replica in a different AZ, cutting out cross-AZ traffic and slashing egress charges on AWS networking bills.

Just make sure your clients are distributed across all AZs and you’ve configured clients to fetch from followers in the local availability zone. Then sit back and enjoy the savings on networking costs without sacrificing performance.

Users can reduce cross-AZ networking costs by configuring Kafka clients to consume from replicas in the same AZ rather than the leader.
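On the client side, the key setting is the standard Kafka consumer config `client.rack` (introduced in KIP-392), which tells brokers which availability zone the consumer runs in so fetches can be served by the in-zone replica. Here’s a minimal sketch; the bootstrap endpoint, topic, and AZ ID are placeholders.

```java
import java.time.Duration;
import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

public class FollowerFetchingConsumer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "pkc-xxxxx.us-east-1.aws.confluent.cloud:9092"); // placeholder
        props.put("group.id", "orders-readers");
        props.put("key.deserializer", StringDeserializer.class.getName());
        props.put("value.deserializer", StringDeserializer.class.getName());
        // Tell brokers which AZ this client runs in; fetches are then served
        // by the replica in the same zone instead of a cross-AZ leader.
        props.put("client.rack", "use1-az1"); // placeholder AZ ID

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(List.of("orders"));
            while (true) {
                consumer.poll(Duration.ofSeconds(1))
                        .forEach(record -> System.out.println(record.value()));
            }
        }
    }
}
```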

Additional new features and updates

Five new Connect with Confluent integrations, including SAP Datasphere hydration

Launched in 2023, our Connect with Confluent (CwC) program provides a portfolio of over 50 fully managed integrations with Confluent, helping customers discover new ways to leverage real-time data across their business. This quarter, CwC welcomed five new integrations—including a native sink integration for SAP Datasphere—which further expand the reach and influence of Confluent’s data streaming ecosystem. With most CwC integrations found and configured directly within your favorite data systems, accessing data streams and building real-time applications has never been easier. 

New integrations to the CwC portfolio in Q4 include AWS IoT Core, CelerData, Kong, Lightstreamer, and a SAP Datasphere sink integration.

Confluent’s CwC partner program now provides well over 50 fully managed integrations with the most popular applications throughout the larger data landscape.

Check out the CwC Q4 announcement blog to learn more about these integrations, how they can address your unique use cases, and our current partner landscape.

Freight clusters on AWS (limited availability)

We’re thrilled to announce the limited availability of Freight clusters, a cost-effective cluster type for high-throughput, relaxed-latency workloads like logging and observability that delivers up to 90% cost savings. First introduced at Kafka Summit London in 2024, Freight clusters bypass expensive replication across multiple availability zones (AZs) by writing directly to object storage, such as S3, using Confluent’s Kora engine and its next-gen "direct write" capabilities. By default, Freight leverages a new private networking option, Private Network Interface (PNI), powered by AWS Cross-Account Elastic Network Interface (ENI), that offers low bandwidth costs and high security.

Confluent’s JavaScript Client for Apache Kafka®

We’re excited to announce the general availability of our official JavaScript Client for Apache Kafka—a highly performant, reliable client based on librdkafka. This client is perfect for developers currently using the community-maintained KafkaJS and kafka-node clients for their Node.js applications. The best part? Your Kafka client stays future-proof with ongoing Confluent support and regular updates, ensuring it remains in sync with the latest Apache Kafka developments.
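Here’s a minimal producer sketch. It assumes the client’s KafkaJS-compatible API surface (the package also exposes a librdkafka-style API), and the broker address is a placeholder.

```javascript
// KafkaJS-compatible API surface of Confluent's JavaScript client
const { Kafka } = require("@confluentinc/kafka-javascript").KafkaJS;

async function run() {
  // Broker address is a placeholder
  const kafka = new Kafka({ kafkaJS: { brokers: ["localhost:9092"] } });
  const producer = kafka.producer();

  await producer.connect();
  await producer.send({
    topic: "orders",
    messages: [{ key: "order-1", value: JSON.stringify({ amount: 42 }) }],
  });
  await producer.disconnect();
}

run().catch(console.error);
```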

To get started, check out our JavaScript client on NPM and GitHub, and take a look at the deep dive blog post.

Flink AI model inference

At Current 2024, we announced the open preview of AI model inference in Confluent Cloud for Apache Flink. We’re thrilled to announce that AI model inference is now generally available, making it easier than ever for you to integrate AI into your stream processing workflows. Start off your new year right by seamlessly integrating remote ML models from providers like OpenAI, GCP Vertex AI, AWS SageMaker, and Azure into your real-time data pipelines with familiar SQL syntax, helping you unlock new GenAI use cases.
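For a flavor of what this looks like, here’s a hedged sketch of the SQL shape: a model is registered with CREATE MODEL and then invoked over a stream with ML_PREDICT. The connection, model, column, and table names are placeholders, and the exact WITH options vary by provider, so check the docs for the settings your provider requires.

```sql
-- Register a remote model (connection name and options are placeholders)
CREATE MODEL sentiment_model
INPUT (text STRING)
OUTPUT (label STRING)
WITH (
  'provider' = 'openai',
  'task' = 'classification',
  'openai.connection' = 'my-openai-connection'
);

-- Invoke the model over a stream with ML_PREDICT
SELECT text, label
FROM reviews, LATERAL TABLE(ML_PREDICT('sentiment_model', text));
```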

Tableflow (Open Preview)

We’re also excited to announce that Tableflow, Confluent’s flagship tool for converting Kafka topics and associated schemas to Iceberg tables in a few clicks to feed any data warehouse or data lake, will enter open preview in a couple of weeks. This new release includes Bring Your Own Bucket (BYOB) and integrations with Amazon SageMaker Studio and AWS Glue, enabling seamless data flow between streaming and analytical tools.

Learn more about our new features in Tableflow, as well as our vision for Tableflow to unify your streaming and analytical ecosystem.

Start building with new Confluent Cloud features

Ready to get started? If you haven’t done so already, sign up for a free trial of Confluent Cloud to explore the new features. New sign-ups receive $400 to spend within Confluent Cloud during their first 30 days. Use the code CL60BLOG for an additional $60 of free usage.*

The preceding outlines our general product direction and is not a commitment to deliver any material, code, or functionality. The development, release, timing, and pricing of any features or functionality described may change. Customers should make their purchase decisions based on services, features, and functions that are currently available.

Confluent and associated marks are trademarks or registered trademarks of Confluent, Inc.

Apache®, Apache Kafka®, and Apache Flink® are either registered trademarks or trademarks of the Apache Software Foundation in the United States and/or other countries. No endorsement by the Apache Software Foundation is implied by using these marks. All other trademarks are the property of their respective owners.

  • Hannah is a product marketer at Confluent. Prior to Confluent, she focused on monetization and growth at TikTok and product launches and messaging for containers services at AWS.
