September | Global

Take Apache Kafka® Global with Confluent

  • Build a globally connected Kafka deployment without the operational complexity
  • Accelerate project delivery times with access to real-time events from anywhere
  • Increase reliability of mission-critical applications by minimizing data loss and downtime

Global resources

Aug 24

Global Event Streaming in Confluent Cloud

Take Apache Kafka Global with Confluent

Aug 24 - Aug 25

Kafka Summit 2020

Discover the World of Event Streaming

Aug 24

Confluent Platform 6.0 Announcement

Completing the Event Streaming Platform

Sept 25

Cluster Linking

Introduction to Cluster Linking

Oct 1

What's New in Confluent Platform 6.0

Learn about the new features in Confluent Platform 6.0

Why Global matters

Reduce operational complexity with global Kafka deployments

Problem: Making Kafka globally available means sharing data across fully independent clusters, regardless of where those clusters are hosted (on-premises or across multiple cloud providers) or how far apart they are. This requires replicating topics between clusters in different environments, which can add infrastructure cost, operational burden, and architectural complexity (a client-level sketch of the hybrid-cloud pattern follows the list below). Confluent allows you to:

  • Scale your event streaming use cases across hybrid-cloud or multi-cloud architectures
  • Improve operational efficiency by joining clusters with Cluster Linking, regardless of environment or physical distance, without running a separate system to manage replication
  • Provide a consistent operational experience regardless of distribution, by simplifying and automating the deployment of self-managed Confluent clusters on market-leading Kubernetes distributions:
    • Red Hat OpenShift
    • VMware Tanzu (including Pivotal Container Service)
    • Google Kubernetes Engine (GKE)
    • Amazon Elastic Kubernetes Service (EKS)
    • Azure Kubernetes Service (AKS)
    • Plus any Kubernetes distribution or managed service meeting the Cloud Native Computing Foundation’s (CNCF) conformance standards
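
As an illustration of the hybrid-cloud pattern described above, below is a minimal, hedged sketch in Java that assumes a cluster link already mirrors an on-premises topic (hypothetically named orders) into a cloud cluster: producers keep writing on-premises while a cloud application consumes the mirrored copy. The broker addresses, topic, and consumer group are placeholders, not values from this page.

```java
import java.time.Duration;
import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringDeserializer;
import org.apache.kafka.common.serialization.StringSerializer;

public class HybridCloudSketch {
    public static void main(String[] args) {
        // Producer keeps writing to the on-premises cluster (placeholder address).
        Properties producerProps = new Properties();
        producerProps.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "onprem-broker:9092");
        producerProps.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        producerProps.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        try (KafkaProducer<String, String> producer = new KafkaProducer<>(producerProps)) {
            producer.send(new ProducerRecord<>("orders", "order-42", "{\"amount\": 99.95}"));
        }

        // A cloud application reads the mirrored copy of the same topic from the
        // destination cluster; the cluster link keeps it in sync asynchronously.
        Properties consumerProps = new Properties();
        consumerProps.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "cloud-broker:9092");
        consumerProps.put(ConsumerConfig.GROUP_ID_CONFIG, "cloud-analytics");
        consumerProps.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        consumerProps.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        consumerProps.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");
        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(consumerProps)) {
            consumer.subscribe(List.of("orders"));
            ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(5));
            for (ConsumerRecord<String, String> record : records) {
                System.out.printf("offset=%d key=%s value=%s%n", record.offset(), record.key(), record.value());
            }
        }
    }
}
```

Because the link is served by the brokers themselves, there is no separate replication process for the application team to operate; clients on either side use the standard Kafka APIs.
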
Remove data silos and access real-time events from anywhere

Problem: Replicating data between two clusters is difficult because message offsets cannot be preserved across independent Kafka clusters and the replication itself requires managing a separate system. Consumers that start reading from a different cluster risk reading the same messages twice or missing messages entirely, leaving topics with inconsistent data between environments. Replicating data between clouds is also expensive, because providers charge when data is retrieved, resulting in high egress fees (a sketch for checking offsets across linked clusters follows the list below). Confluent allows you to:

  • Lower the risk of reprocessing or skipping critical messages by ensuring consumers know where to start reading topic-level data when migrating from one environment to another
  • Ensure event data is available everywhere through Cluster Linking's asynchronous replication, which requires no additional nodes
  • Simplify data management by replicating all event data once, then letting unlimited applications read it in the new cloud environment
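
To make the offset point concrete, here is a hedged sketch (not Confluent tooling) that compares a consumer group's committed offsets on the source cluster and on the linked destination cluster before repointing consumers. The broker addresses, group, and topic are placeholders; the premise that committed offsets carry over to the destination reflects the offset-preserving behavior of Cluster Linking described above.

```java
import java.util.Map;
import java.util.Properties;
import java.util.concurrent.ExecutionException;
import org.apache.kafka.clients.admin.Admin;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.consumer.OffsetAndMetadata;
import org.apache.kafka.common.TopicPartition;

public class OffsetComparisonSketch {
    // Fetch the committed offsets of one consumer group from a given cluster.
    static Map<TopicPartition, OffsetAndMetadata> committedOffsets(String bootstrap, String groupId)
            throws ExecutionException, InterruptedException {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrap);
        try (Admin admin = Admin.create(props)) {
            return admin.listConsumerGroupOffsets(groupId)
                        .partitionsToOffsetAndMetadata()
                        .get();
        }
    }

    public static void main(String[] args) throws Exception {
        // Placeholder addresses for the source cluster and the linked destination cluster.
        Map<TopicPartition, OffsetAndMetadata> source = committedOffsets("onprem-broker:9092", "orders-service");
        Map<TopicPartition, OffsetAndMetadata> destination = committedOffsets("cloud-broker:9092", "orders-service");

        // If topics and committed offsets are mirrored faithfully, the two maps agree, and
        // consumers can be repointed at the destination without reprocessing or skipping data.
        source.forEach((partition, committed) -> {
            OffsetAndMetadata mirrored = destination.get(partition);
            long mirroredOffset = (mirrored == null) ? -1L : mirrored.offset();
            System.out.printf("%s source=%d destination=%d%n", partition, committed.offset(), mirroredOffset);
        });
    }
}
```
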
Minimize data loss with streamlined disaster recovery operations

Problem: To replicate a Kafka cluster to a backup data center using MirrorMaker 2, you need to spin up a separate Kafka Connect cluster to run the replication process, adding complexity to the overall architecture and management burden for your IT team. Even once the cluster is properly replicated, ongoing challenges remain, such as DNS reconfiguration, imprecise offset translation, and siloed failover workflows (a client-level failover sketch follows the list below). Confluent allows you to:

  • Streamline disaster recovery operations by deploying a single cluster across multiple data centers with Multi-Region Clusters or connecting independent clusters with Cluster Linking
  • Achieve faster recovery time through automated failover without worrying about DNS reconfigurations and offset translations with Multi-Region Clusters
  • Ensure high availability with Multi-Region Clusters by minimizing disaster recovery complexity and improving recovery objectives
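
From the client's perspective, the key difference is that a Multi-Region Cluster is still a single cluster, so failover needs no DNS cutover and no offset translation. The sketch below shows a producer whose bootstrap servers span both data centers and whose durability settings favor surviving a regional outage; the broker addresses, topic, and exact settings are placeholder assumptions, not configuration taken from this page.

```java
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class MultiRegionProducerSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        // One logical cluster: bootstrap servers span both data centers (placeholder addresses),
        // so there is no DNS cutover to a "backup" cluster during a regional outage.
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "dc-east-broker:9092,dc-west-broker:9092");
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        // Wait for in-sync replicas to acknowledge and keep retrying through broker failover;
        // offsets remain valid because the surviving replicas belong to the same cluster.
        props.put(ProducerConfig.ACKS_CONFIG, "all");
        props.put(ProducerConfig.ENABLE_IDEMPOTENCE_CONFIG, "true");
        props.put(ProducerConfig.DELIVERY_TIMEOUT_MS_CONFIG, "120000");

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            producer.send(new ProducerRecord<>("payments", "txn-1001", "{\"status\": \"captured\"}"),
                    (metadata, exception) -> {
                        if (exception != null) {
                            System.err.println("Send failed after retries: " + exception.getMessage());
                        } else {
                            System.out.printf("Written to %s-%d at offset %d%n",
                                    metadata.topic(), metadata.partition(), metadata.offset());
                        }
                    });
        }
    }
}
```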

Confluent Benefits

Make Kafka Globally Available
Quickly connect Kafka regardless of environment or physical distance

Accelerate Project Delivery Times
Easily replicate events across clusters without the overhead

Increase Reliability of Mission-Critical Applications
Ensure high availability by minimizing disaster recovery complexity

Features

  • Preview

    Cluster Linking

    Cluster Linking connects Kafka clusters without spinning up extra nodes, while preserving offsets

  • Available

    Multi-Region Clusters

    Multi-Region Clusters deploys a single Kafka cluster across multiple data centers (a topic placement sketch follows this list)

  • Available

    Operator

    Operator simplifies running Confluent Platform as a cloud-native system on Kubernetes
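
As a concrete illustration of the Multi-Region Clusters feature above, here is a hedged sketch that creates a topic whose replicas and observers are placed in different data centers. The broker address, topic name, partition count, and rack labels are placeholders, and the confluent.placement.constraints topic configuration and its JSON shape are assumptions drawn from Confluent Platform's Multi-Region Clusters feature; verify them against your Confluent Platform version.

```java
import java.util.Collections;
import java.util.Map;
import java.util.Optional;
import java.util.Properties;
import org.apache.kafka.clients.admin.Admin;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.NewTopic;

public class PlacementSketch {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "mrc-broker:9092"); // placeholder

        // Assumed placement policy: two synchronous replicas in the "east" data center and one
        // observer (asynchronous replica) in "west"; rack labels must match the brokers' broker.rack.
        String placement = "{"
                + "\"version\":1,"
                + "\"replicas\":[{\"count\":2,\"constraints\":{\"rack\":\"east\"}}],"
                + "\"observers\":[{\"count\":1,\"constraints\":{\"rack\":\"west\"}}]"
                + "}";

        // Replication factor is left unset; the placement constraints determine replica counts.
        NewTopic topic = new NewTopic("payments", Optional.of(6), Optional.empty());
        topic.configs(Map.of("confluent.placement.constraints", placement));

        try (Admin admin = Admin.create(props)) {
            admin.createTopics(Collections.singletonList(topic)).all().get();
        }
    }
}
```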

More Project Metamorphosis releases

Infinite

How do you efficiently scale Kafka storage to make sure you can retain as much data as you need - without pre-provisioning storage you don't use?

Everywhere

How can you ensure your Kafka infrastructure is flexible enough to adapt to your changing cloud requirements?

Secure

How do you reduce the risk of security breaches that can result in app downtime or costly data leaks throughout the Kafka operational lifecycle?

Try it out

Cloud
Fully managed service

Deploy in minutes. Pay as you go. Try a serverless Kafka experience.

Platform
Self-managed software

Experience the power of our enterprise-ready platform through our free download.

*Receive $200 off your bill each calendar month for the first three months