Happy holidays from Confluent! It’s that time in the quarter again, when we get to share our latest and greatest features on Confluent Cloud.
To start, we’re thrilled to share that Confluent was named a Leader in The Forrester Wave™: Streaming Data Platforms, Q4 2023, and The Forrester Wave™: Cloud Data Pipelines, Q4 2023! Forrester strongly endorsed Confluent’s vision to transform data streaming platforms from a “nice-to-have” to a “must-have.” The reports also validate that Confluent is the leading data streaming platform in the industry and is best suited to build any type of data pipeline.
And we remain committed to improving our data streaming platform to provide our customers with the best possible experience. Last quarter, we made some exciting announcements about our public preview of Confluent Cloud’s fully managed service for Apache Flink®, improvements to Kora Engine, Data Portal on Stream Governance (which is now generally available), and much more! If you haven’t checked these out already, please visit our Q3 Confluent Cloud Launch blog.
This quarter, our updates to Confluent Cloud focus on delivering a more streamlined and secure experience across our data streaming platform, so our users can focus on what matters most: leveraging real-time data to drive business outcomes. See an overview of our new features below and read on for more details!
The release of Confluent Cloud Message Browser provides users with a streamlined and intuitive way to search, sort, and view messages across multiple partitions. It offers a refreshed UI that makes it easy to instantly view the messages produced to a topic, sort messages by timestamp, and quickly text-search within the current results. In addition, users can now view new metrics such as production and consumption rate and total messages in a topic (for non-compacted topics).
With Confluent Cloud Message Browser, whether you're a developer, data scientist, or IT professional, you can now efficiently manage your Kafka workloads in the cloud and unlock the full potential of your data. Below are some of the feature details:
Multi-partition message viewing
Instantly view the last 50 messages produced to a topic
Sort messages from earliest or latest, or jump to a specific timestamp
View metrics such as production and consumption rate, as well as total messages in a topic (for non-compacted topics)
Quickly text-search within the current results
Confluent's private networking feature allows users to connect their Confluent Cloud environment to their private network securely. This enables data to flow between the two environments without going over the public internet, providing enhanced security and reliability.
Today, we are excited to announce Resource Management Access, a new feature that enables customers with private network clusters to read topics from the Confluent Cloud user interface. This feature enables users to:
Access topic metadata
Access consumer lags and other topic-level metrics
Access and configure Schema Registry
Use Stream Lineage
Create a connector with topic selection
The Confluent Cloud UI provides a variety of use cases for customers, including the ability to create and manage Kafka clusters, configure topics, set access control lists (ACLs) and quotas, monitor cluster health and performance, and visualize data streams using the Stream Designer.
Today, we are excited to announce Global Search Optimization, which delivers search results roughly 15x faster, significantly enhancing UI performance and ensuring a seamless user experience.
Connect with Confluent: Since its launch this past July, our Connect with Confluent (CwC) partner program has been growing fast and strong. Through native integrations built within partner applications, we’re further extending the global data streaming ecosystem and bringing Confluent’s fully managed data streams directly to developers’ doorsteps within the tools where they are already working.
In our recent CwC Q4 Announcement, we introduced the latest cohort of partners who have built Confluent integrations and shared additional details on our recently announced Data Streaming for AI initiative built together with CwC members. We now offer vector database connectivity with Elastic, MongoDB, Pinecone, Rockset, Weaviate, and Zilliz to provide best-of-breed options for developing real-time AI use cases.
Custom Connectors enhancements: At Kafka Summit London, we announced Custom Connectors that enable dev teams to use their own connector plugins on Confluent Cloud and eliminate the need to manage Connect infrastructure. Since then, we’ve observed customers using custom connectors to integrate home-grown systems and leverage partner-built connectors from Ably, ClickHouse, Neo4j, and more!
We’re pleased to add API, CLI, and Schema Registry support for custom connectors to improve developer workflow and data governance. With these features you can now:
Deploy and manage the lifecycle of custom connectors programmatically, enabling pipeline workflow automation (see the sketch after this list)
Leverage Schema Registry with custom connectors to ensure streaming data quality and consistency
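As a rough illustration of the programmatic lifecycle management mentioned above, the sketch below checks, pauses, and deletes a custom connector through the Confluent Cloud Connect REST API. The environment ID, cluster ID, connector name, and credentials are placeholders, and the exact endpoint paths and payloads should be confirmed against the Connect API reference.

```python
# Minimal sketch: manage a custom connector's lifecycle via the
# Confluent Cloud Connect REST API. All IDs, names, and the exact
# endpoint paths below are illustrative assumptions -- check the
# Connect API docs for the authoritative routes and payloads.
import requests

API_KEY = "<cloud-api-key>"        # Cloud API key (placeholder)
API_SECRET = "<cloud-api-secret>"  # Cloud API secret (placeholder)
ENV_ID = "env-abc123"              # environment ID (placeholder)
CLUSTER_ID = "lkc-xyz789"          # Kafka cluster ID (placeholder)
CONNECTOR = "my-custom-sink"       # custom connector name (placeholder)

BASE = (
    "https://api.confluent.cloud/connect/v1"
    f"/environments/{ENV_ID}/clusters/{CLUSTER_ID}/connectors"
)
auth = (API_KEY, API_SECRET)

# Check the connector's current status.
status = requests.get(f"{BASE}/{CONNECTOR}/status", auth=auth)
print(status.json())

# Pause it, e.g., ahead of a downstream maintenance window.
requests.put(f"{BASE}/{CONNECTOR}/pause", auth=auth).raise_for_status()

# ...and delete it when the pipeline is retired.
requests.delete(f"{BASE}/{CONNECTOR}", auth=auth).raise_for_status()
```

The same calls can be scripted in CI/CD pipelines so connector deployments move through environments alongside the rest of your application code.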
Additionally, we are excited to announce that Custom Connectors are now available across all AWS regions on Confluent Cloud.
Google BigQuery V2 Sink Connector: The Google BigQuery V2 sink connector is an updated version of our existing connector that supports OAuth 2.0 when connecting to BigQuery and uses Google's BigQuery Storage Write API that combines batch loading and streaming modes. The API's gRPC protocol is also more efficient than REST, helping you reduce BigQuery ingestion costs. Current users should consider migrating to V2 using the migration guide, and new users should start directly with the V2 connector. Both the legacy and V2 versions of the Google BigQuery sink connector will be supported until further notice.
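If you provision connectors programmatically, creating the V2 connector might look roughly like the sketch below. The connector class and property names shown are illustrative assumptions rather than the exact connector contract; refer to the connector documentation and the migration guide for the real configuration keys.

```python
# Rough sketch of creating a BigQuery V2 sink connector via the
# Confluent Cloud Connect REST API. The connector class and property
# names are illustrative assumptions; see the connector docs and the
# V1 -> V2 migration guide for the exact configuration keys.
import requests

BASE = (
    "https://api.confluent.cloud/connect/v1"
    "/environments/env-abc123/clusters/lkc-xyz789/connectors"  # placeholders
)
auth = ("<cloud-api-key>", "<cloud-api-secret>")  # placeholders

connector = {
    "name": "bigquery-v2-orders",
    "config": {
        "connector.class": "BigQueryStorageSink",  # assumed V2 class name
        "topics": "orders",
        "input.data.format": "AVRO",
        "authentication.method": "OAuth 2.0",      # assumed key for the new OAuth option
        "project": "my-gcp-project",               # GCP project (placeholder)
        "datasets": "analytics",                   # target dataset (placeholder)
        "tasks.max": "1",
    },
}

resp = requests.post(BASE, json=connector, auth=auth)
resp.raise_for_status()
print(resp.json())
```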
Schema contexts for connectors: Leverage fully managed connectors in Confluent Cloud environments using schema contexts. A schema context is a grouping of subject names and schema IDs used to separate internal environments like dev, test, and prod within a Confluent Cloud environment and Schema Registry. Now you can specify a schema context when configuring your cloud connector, so connectors can write to and read from Kafka topics without conflicts, even if the topics share the same name across different internal environments.
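To make the idea of a schema context concrete, the sketch below registers the same subject name under two different contexts in one Schema Registry using the confluent-kafka Python client. The ":.context:subject" qualified-subject syntax and the placeholder URL and credentials are assumptions based on Schema Registry's qualified-subject convention; connector configuration may expose the context differently, so verify against the Schema Registry and connector docs.

```python
# Sketch: the same subject name kept separate under "dev" and "prod"
# schema contexts in one Schema Registry. The ":.context:subject"
# qualified-subject syntax used here is an assumption; verify it
# against the Schema Registry docs for your cluster.
from confluent_kafka.schema_registry import SchemaRegistryClient, Schema

sr = SchemaRegistryClient({
    "url": "https://psrc-xxxxx.us-east-2.aws.confluent.cloud",  # placeholder
    "basic.auth.user.info": "<sr-api-key>:<sr-api-secret>",     # placeholder
})

order_v1 = Schema(
    '{"type": "record", "name": "Order", '
    '"fields": [{"name": "id", "type": "string"}]}',
    "AVRO",
)

# "orders-value" in the dev context and in the prod context are
# independent subjects, so schema versions and IDs never collide even
# though the topic (and subject) name is identical in both environments.
dev_id = sr.register_schema(":.dev:orders-value", order_v1)
prod_id = sr.register_schema(":.prod:orders-value", order_v1)
print(dev_id, prod_id)
```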
Data Portal in Stream Governance now GA: We are excited to announce the general availability of Data Portal in Stream Governance on Confluent Cloud. With this addition, users can safely unlock data and increase developer productivity with a self-service, data-centric interface for discovering, accessing, and enriching real-time data streams flowing across their organizations.
Flink observability and metrics: Confluent Cloud for Apache Flink now has three additional metrics (messages in, messages out, and messages behind) available to query directly via the Metrics API, Confluent Cloud UI, and Datadog, providing users with more flexibility and control over monitoring and analyzing the performance of their Flink workloads.
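As an example of pulling these metrics programmatically, the sketch below posts a query to the Confluent Cloud Metrics API. The metric name, filter field, and request shape shown here are assumptions; check the Metrics API reference for the current Flink metric identifiers.

```python
# Sketch: query a Flink metric from the Confluent Cloud Metrics API.
# The metric name and filter field below are assumptions -- consult the
# Metrics API reference for the exact Flink metric identifiers.
import requests

QUERY_URL = "https://api.telemetry.confluent.cloud/v2/metrics/cloud/query"
auth = ("<cloud-api-key>", "<cloud-api-secret>")  # placeholders

query = {
    "aggregations": [
        {"metric": "io.confluent.flink/num_records_in"}  # assumed metric name
    ],
    "filter": {
        "field": "resource.compute_pool.id",  # assumed filter field
        "op": "EQ",
        "value": "lfcp-abc123",               # compute pool ID (placeholder)
    },
    "granularity": "PT1M",
    "intervals": ["2023-12-01T00:00:00Z/PT1H"],
}

resp = requests.post(QUERY_URL, json=query, auth=auth)
resp.raise_for_status()
for point in resp.json().get("data", []):
    print(point["timestamp"], point["value"])
```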
Public IP addresses for external OAuth calls: With this update, customers no longer need to manually request the current in-use IP addresses for external OAuth calls; instead, they can programmatically call the Confluent Cloud IP Addresses REST API to retrieve a list of public IP addresses and keep their firewall rules current.
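For example, a firewall automation job might refresh its allow-list along these lines. The endpoint path and response fields in this sketch are assumptions; confirm them against the IP Addresses API reference.

```python
# Sketch: fetch Confluent Cloud's published public IP addresses so a
# firewall allow-list can be refreshed automatically. The endpoint path
# and response fields are assumptions; verify them against the
# IP Addresses API reference.
import requests

IP_API = "https://api.confluent.cloud/networking/v1/ip-addresses"  # assumed path
auth = ("<cloud-api-key>", "<cloud-api-secret>")  # placeholders

resp = requests.get(IP_API, auth=auth)
resp.raise_for_status()

for entry in resp.json().get("data", []):
    # Each entry is expected to carry a CIDR block plus metadata such as
    # cloud provider, region, and the service using the address range.
    print(entry.get("ip_prefix"), entry.get("cloud"), entry.get("region"))
```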
Additional cloud regions: Confluent Cloud continues to expand its coverage globally, adding four new regions (GCP Paris, GCP Doha, GCP Dammam, and AWS Tel Aviv), bringing the total number of regions for Confluent Cloud to 87!
At Confluent, we understand the importance of providing our customers with a secure, scalable, and efficient event-streaming platform that enables them to leverage real-time data to drive business outcomes. Over the past year, we have launched several new products and features within Confluent Cloud that have helped our customers create and scale their workloads efficiently.
We started the year by announcing some exciting features that helped customers build a secure shared services data streaming platform. Some select features included:
Centralized Identity Management using OAuth, a cloud-native authentication standard that allows integration with third-party identity providers
Enhanced RBAC
During Kafka Summit London in May, we announced Kora, the Apache Kafka® engine built for the next level of elasticity, reliability, and performance in the cloud, along with other exciting features.
Our latest launch during Current included updates that enable customers to deliver intelligent, secure, and cost-effective data pipelines. Select features included:
Open preview of Confluent Cloud for Apache Flink, our cloud-native, serverless Flink service
Overall, our key features launched in 2023 continued to empower our customers to drive value from their real-time data by leveraging a secure, scalable, and efficient data streaming platform.
If you haven’t done so already, sign up for a free trial of Confluent Cloud to explore new features. New sign-ups receive $400 to spend within Confluent Cloud during their first 30 days. Use the code CL60BLOG for an additional $60 of free usage.*