The Q3 Confluent Cloud Launch comes to you from Current 2023, where data streaming industry experts have come together to share insights into the future of data streaming and new areas of innovation. This year, we’re introducing Confluent Cloud’s fully managed service for Apache Flink®, sharing improvements to the Kora engine, exploring how AI and streaming work together, and much more.
This year's Current event attracted over 3,500 attendees, both in-person and virtual, and featured 100+ learning sessions led by industry experts. Confluent CEO Jay Kreps kicked off the event with a keynote, "Streaming into the future," exploring the evolution of the data streaming platform. NASA's Joe Foster discussed data streaming at NASA. Day 1 included deep-dive sessions by Confluent leadership, Confluent Cloud announcements, demos on data governance and Flink stream processing, and discussions with Warner Bros. Discovery and Notion on their use cases. Day 2 brought updates on Flink, Kafka, and more learning sessions.
Shaun Clowes discussed data streaming platforms, and Julie Price introduced the Kora engine. Girish Rao, Sharmey Shah, Mike Agnich, and Greg DeMichillie covered Stream Governance and related innovations. Konstantine tackled Apache Flink for stream processing. CMO Stephanie Buscemi and Rachel Groberman explored AI in data streaming, followed by a GenAI demo by Andrew Sellers and Rachel.
Don't forget to visit our blog for a brief overview of the highlights from Current 2023. You can also access the complete Day 1 and Day 2 keynote sessions by clicking here!
With this, we are excited to push forward the vision set by our speakers for the Q3 launch in the latest release of Confluent Cloud. These new features help enable customers to easily and securely build, deploy, and consume intelligent data pipelines.
Here is an overview of the latest features—read on for more details.
Join us to see the new features in action in the Q3 Launch demo webinar.
Building on the success of our Flink Early Access Program last quarter, we are thrilled to announce the open preview of Confluent Cloud for Apache Flink® in select regions on AWS, with more regions and clouds becoming available during this preview phase. Check out the Flink quick start to see how you can try the industry's only cloud-native, serverless Flink service today!
Stream processing plays a critical role in the infrastructure stack for data streaming, enabling developers to filter, join, aggregate, and transform data streams on the fly to make them more usable for downstream systems and applications. Apache Flink is emerging as the de facto standard among stream processing frameworks because of its powerful runtime, unified processing model for batch and streaming, and active developer community.
However, self-managing Flink (like Apache Kafka®) is very challenging and resource-intensive due to its operational complexity, steep learning curve, and high costs for in-house support. This burden results in teams spending more time on low-level infrastructure management than building new products and features for their organizations.
With the open preview of Confluent Cloud for Apache Flink, you can easily process data in-flight to create high-quality, reusable streams delivered anywhere in real time.
Confluent's fully managed Flink service enables you to:
Effortlessly filter, join, and enrich your data streams with Flink, the de facto standard for stream processing
Enable high-performance and efficient stream processing at any scale, without the complexities of infrastructure management
Experience Kafka and Flink as a unified platform, with fully integrated monitoring, security, and governance
Confluent Cloud for Apache Flink is currently in preview, meaning it is intended for testing and experimentation purposes. For more information about the Flink service, read the deep dive blog post.
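To give a flavor of the filter-join-enrich workflow described above, here is a minimal Flink SQL sketch. The table and column names (`orders`, `customers`, and so on) are hypothetical, not taken from an actual Confluent Cloud environment:

```sql
-- Hypothetical example: enrich an orders stream with customer details
-- and keep only high-value orders. Flink evaluates this continuously
-- over the incoming streams rather than once over static tables.
SELECT
  o.order_id,
  o.amount,
  c.customer_name,
  c.region
FROM orders AS o
JOIN customers AS c
  ON o.customer_id = c.customer_id
WHERE o.amount > 100;
```

Because Flink SQL uses standard ANSI SQL syntax, queries like this can often be written without learning a new API, while the service handles the underlying stream processing runtime.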
Kora, our cloud-native Apache Kafka engine, recently earned the top prize at the 2023 Very Large Data Bases (VLDB) Conference. Kora's automated operations and serverless abstractions enable us to offer a truly fully managed service with the elasticity, resiliency, and performance that customers have come to expect. It also enables us to efficiently manage tens of thousands of Confluent Cloud clusters (and counting), resulting in cost savings that we are excited to pass along to our customers.
As part of this quarterly launch, we are introducing Enterprise clusters, a secure, cost-effective, and serverless Kafka cluster powered by the Kora engine. By introducing Enterprise clusters, we have taken a step further in expanding our serverless capabilities to those who need private networking, starting with AWS PrivateLink—a single regional network interface to securely access resources in Confluent Cloud, simplifying network administration. With Enterprise clusters, you can:
Easily and securely connect your private environments to Confluent Cloud with simplified, reusable setups and secure network isolation
Optimize resource and cost efficiency with auto-scaling clusters to meet any demand
Eliminate manual sizing, provisioning, and ongoing management with automated operations and intelligent data tiering powered by our Kora Engine
Private networking used to only be available through our Dedicated clusters. Now with Enterprise, you can establish secure and direct communication between VPCs and Confluent Cloud without internet exposure, while enjoying instant provisioning and auto-scaling to scale up for spikes in demand and then scale down when not needed. Enterprise clusters are also equipped with the industry's most comprehensive 99.99% uptime SLA, infinite storage, and battle-tested connectivity, governance, and security tools, consistent with the experience you get from other Confluent Cloud clusters.
Confluent Cloud now offers storage for 20% less, effective October 1, 2023. Teams can still retain all real-time and historical events without limits, helping power more streaming use cases, including event sourcing, artificial intelligence, and stream processing, at a more affordable price. Try Confluent Cloud for free in minutes.
We’re improving our Confluent Terraform provider with support for HashiCorp Sentinel Integration—a policy-as-code framework that allows organizations to define and enforce custom policies to ensure infrastructure and application deployments comply with specific rules and regulations. By integrating Sentinel with Confluent Terraform, customers can enforce policies at every stage of the infrastructure lifecycle, from provisioning to ongoing management.
The key benefits include:
Policy-driven governance: Establish policy-as-code rules that govern the configuration and access control of Confluent Cloud resources. This ensures that infrastructure and application deployments adhere to predefined standards, minimizing potential security risks and ensuring compliance with internal policies.
Customizable policies: Create and tailor policies specific to the business requirements, accommodating specific security and compliance needs.
Centralized policy management: Enable centralized policy management across different Confluent Cloud resources managed through Confluent Terraform, streamlining policy updates and providing a unified view of security and compliance status.
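As an illustration of policy-as-code governance, here is a minimal sketch of what such a Sentinel policy might look like. The resource type and attribute names (`confluent_kafka_topic`, `partitions_count`) follow the Confluent Terraform provider's naming, but treat the exact shape as an assumption to verify against your provider and Sentinel versions:

```sentinel
# Hypothetical Sentinel policy: require every newly created Kafka topic
# to declare at least 3 partitions. Attribute names are assumptions
# based on the Confluent Terraform provider's schema.
import "tfplan/v2" as tfplan

kafka_topics = filter tfplan.resource_changes as _, rc {
    rc.type is "confluent_kafka_topic" and
    rc.mode is "managed" and
    rc.change.actions contains "create"
}

main = rule {
    all kafka_topics as _, rc {
        rc.change.after.partitions_count >= 3
    }
}
```

Policies like this run automatically during `terraform plan`/`apply` in Terraform Cloud or Enterprise, blocking non-compliant changes before they reach Confluent Cloud.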
Apart from the HashiCorp Sentinel integration, we have also added a few other features in this Terraform update:
Data Catalog Support that provides customers with an auditable, automated way to deploy business metadata and tag information on top of Confluent resources.
Resource Importer that allows customers to seamlessly switch to the Confluent Terraform provider by migrating thousands of Confluent Cloud resources.
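The Data Catalog support mentioned above might look roughly like the following HCL sketch, which declares a tag that can then be bound to topics or schemas. The resource name `confluent_tag` and its attributes are assumptions to check against the provider documentation for your version:

```hcl
# Hypothetical sketch: declaring a Stream Catalog tag with the
# Confluent Terraform provider. Variable names are illustrative.
resource "confluent_tag" "pii" {
  schema_registry_cluster {
    id = var.schema_registry_id
  }
  rest_endpoint = var.schema_registry_rest_endpoint
  credentials {
    key    = var.schema_registry_api_key
    secret = var.schema_registry_api_secret
  }
  name        = "PII"
  description = "Streams containing personally identifiable information"
}
```

Managing tags this way keeps catalog metadata in version control alongside the rest of the infrastructure, which is what makes the deployment auditable.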
Coming soon to our Stream Governance suite, Data Portal provides a robust, flexible interface that redefines the way you discover, access, and leverage your data streams flowing through Confluent Cloud. Data Portal builds on our Stream Catalog capabilities and brings our vision for reusable, high-quality, and trusted data products to the forefront of our product. By getting the data you need faster through the portal, you can accelerate the time it takes to ship new products and features.
With Data Portal, you can:
Search, discover, and explore existing topics, tags, and metadata across your organization with end-to-end visibility to choose the data most relevant to your projects
Seamlessly and securely request access to data streams and trigger an approval workflow that connects you with the data owner, all within the Confluent Cloud UI
Easily build and manage data products to power streaming pipelines and applications by understanding, accessing, and enriching existing data streams
Note: Data Portal is coming soon to our Stream Governance suite. Stay tuned for updates on its availability.
In today's interconnected world, data replication and disaster recovery (DR) are critical components of an organization's infrastructure. Cluster Linking is a powerful tool that facilitates offset-preserving replication of topics and related metadata from a source cluster to a destination cluster, and has become the go-to solution for many businesses with clusters spread across different regions. However, as the demand for seamless failover capabilities grows, we are making our cluster linking capabilities better to meet full disaster recovery requirements.
The new Bidirectional Cluster Linking supports two primary use cases by letting data flow in both directions between a pair of clusters:
Active/Active architectures (where both clusters receive producers): simultaneous processing and data synchronization between multiple clusters or systems to achieve high availability and fault tolerance
Disaster recovery (both Active/Passive and Active/Active): Active/Passive and Active/Active disaster recovery are strategies involving standby environments or continuous synchronization for high availability, data integrity, and minimal downtime, protecting businesses from disruptions and data loss
Benefits include:
Flexible Data Exchange: Bi-directional links enable both source and destination roles for topics, supporting versatile data synchronization
Simplified Connection and Security: Bi-directional links offer options for outbound and inbound connections, streamlining security configuration
Efficient Consumer Offset Migration: Bi-directional links facilitate seamless consumer migration with retained offsets for consistent data processing
Explore the comprehensive documentation available for more in-depth information and insights.
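The offset-preserving replication at the heart of Cluster Linking can be sketched in a few lines of plain Python. This is a conceptual model only: real Cluster Linking operates at the Kafka protocol level, and the in-memory dicts here simply stand in for topic partitions:

```python
# Minimal sketch of offset-preserving replication between two clusters,
# modeled with in-memory dicts mapping offset -> record bytes.
def replicate(source: dict, destination: dict) -> None:
    """Copy records to the destination, keeping each record's source offset."""
    for offset, record in source.items():
        # Mirror topics keep byte-for-byte copies at identical offsets,
        # so consumers can fail over without recomputing positions.
        destination[offset] = record

source_topic = {0: b"order-1", 1: b"order-2", 2: b"order-3"}
mirror_topic = {}
replicate(source_topic, mirror_topic)
# A consumer that committed offset 1 on the source can resume at
# offset 1 on the mirror and read the same record.
```

Preserving offsets is what makes the consumer offset migration described above "seamless": committed positions remain valid on the destination cluster after failover.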
Cloud Audit Logs for Kafka Produce & Consume: In this launch, we're introducing enhancements to Cloud Audit Logs for Kafka Produce and Consume, allowing infosec and security administrators to access audit trails for crucial actions. These enhancements offer flexible configuration options, granular visibility, and cost-effective decision-making benefits for security, infrastructure, and procurement administrators.
Cluster Linking with AWS PrivateLink: AWS PrivateLink for Confluent Cloud services provides secure and direct communication between VPCs and Confluent Cloud, enhancing data privacy and security while mitigating network threats, making it crucial for safeguarding Confluent Cloud communication within an AWS environment. This service offers enhanced network-level security, seamless Cluster Linking, and simplified setup for efficient data exchange.
Ready to get started? Remember to register for the Q3 ʼ23 Launch demo webinar on October 17, where you’ll learn firsthand from our product managers how to put these new features to use.
And if you haven’t done so already, sign up for a free trial of Confluent Cloud to explore new features—no credit card required. New sign-ups receive $400 to spend within Confluent Cloud during their first 30 days. Use the code CL60BLOG for an additional $60 of free usage.*