Take Apache Kafka Global with Confluent
Learn about the new features in Confluent Platform 6.0
Problem: Making Kafka globally available means sharing data across fully independent clusters, regardless of where those clusters are hosted (on-prem or across multiple cloud providers) or the distance between them. This requires replicating topics between clusters in different environments, which can add infrastructure costs, operational burden, and architectural complexity. Confluent allows you to:
Problem: Replicating data between two clusters is difficult because message offsets are not preserved from one Kafka cluster to the other, and a separate system must be managed to run the replication. Consumers that start reading from a different cluster risk reading the same messages twice or missing messages entirely, leaving topics with inconsistent data between environments (see the consumer sketch below). Furthermore, replicating data between clouds is expensive because providers charge when data is retrieved, resulting in high data egress fees. Confluent allows you to:
Problem: To replicate a Kafka cluster to a backup data center using MirrorMaker 2, you need to spin up a separate Kafka Connect cluster to run the replication process, adding complexity to the overall architecture and putting a greater management burden on your IT team. Even once the cluster is properly replicated, ongoing challenges remain, such as DNS reconfiguration, imprecise offset translation, and siloed workflows. Confluent allows you to:
Quickly connect Kafka regardless of environment or physical distance
Easily replicate events across clusters without extra infrastructure overhead
Ensure high availability by minimizing disaster recovery complexity
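The offset problem above is easiest to see from the consumer's side. The following is a minimal Java sketch under stated assumptions: the broker addresses, topic name, and consumer group ID are placeholders, and it assumes the topic and its committed consumer offsets are mirrored to the other cluster. Because the mirrored topic keeps the source cluster's offsets, failing over amounts to pointing the same consumer group at the other cluster's bootstrap servers.

```java
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

import java.time.Duration;
import java.util.Collections;
import java.util.Properties;

public class FailoverConsumerSketch {

    // Hypothetical endpoints: the primary cluster and the cluster that mirrors its topics.
    private static final String PRIMARY = "primary-broker:9092";
    private static final String SECONDARY = "secondary-broker:9092";

    public static void main(String[] args) {
        // During a disaster-recovery failover, the only thing that changes is the
        // bootstrap server list. Because the mirrored topic preserves source offsets,
        // the group's committed offsets remain meaningful on the other cluster.
        String bootstrapServers = args.length > 0 ? args[0] : PRIMARY;
        runConsumer(bootstrapServers.equals("--failover") ? SECONDARY : bootstrapServers);
    }

    static void runConsumer(String bootstrapServers) {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrapServers);
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "orders-processor");
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        // Only used if the group has no committed offset on this cluster.
        props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("orders"));
            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(1));
                for (ConsumerRecord<String, String> record : records) {
                    System.out.printf("offset=%d key=%s value=%s%n",
                            record.offset(), record.key(), record.value());
                }
            }
        }
    }
}
```

If committed consumer offsets are not mirrored, the auto.offset.reset setting above determines where the group starts reading instead.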
Cluster Linking connects Kafka clusters without spinning up extra nodes, while preserving message offsets
Multi-Region Clusters deploy a single Kafka cluster across multiple data centers (see the topic sketch below)
Confluent Operator simplifies running Confluent Platform as a cloud-native system on Kubernetes
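To make the Multi-Region Clusters idea concrete, here is a minimal sketch, assuming a single cluster stretched across two data centers whose brokers advertise their location via the broker rack property, and using the confluent.placement.constraints topic configuration from the Multi-Region Clusters documentation. The broker addresses, topic name, and rack labels are placeholders, not values from this page.

```java
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.NewTopic;

import java.util.Collections;
import java.util.Map;
import java.util.Optional;
import java.util.Properties;

public class MultiRegionTopicSketch {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        // Placeholder bootstrap servers for one cluster stretched across two data centers.
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "broker-east:9092,broker-west:9092");

        // Hypothetical placement: two synchronous replicas in the "east" rack and
        // two asynchronous observers in the "west" rack for disaster recovery.
        String placement =
            "{\"version\":1,"
          + "\"replicas\":[{\"count\":2,\"constraints\":{\"rack\":\"east\"}}],"
          + "\"observers\":[{\"count\":2,\"constraints\":{\"rack\":\"west\"}}]}";

        try (AdminClient admin = AdminClient.create(props)) {
            // Partition count is explicit; the replication factor is left unset so the
            // replica and observer counts come from the placement constraints
            // (an assumption based on how placement-constrained topics are created).
            NewTopic orders = new NewTopic("orders", Optional.of(6), Optional.empty())
                    .configs(Map.of("confluent.placement.constraints", placement));
            admin.createTopics(Collections.singleton(orders)).all().get();
        }
    }
}
```

Because this is one logical cluster, clients can keep working against the surviving data center without operating a separate replication system.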
How do you efficiently scale Kafka storage so you can retain as much data as you need, without pre-provisioning storage you won't use?
How can you ensure your Kafka infrastructure is flexible enough to adapt to your changing cloud requirements?
How do you reduce the risk of security breaches that can result in app downtime or costly data leaks throughout the Kafka operational lifecycle?