Change data capture is a popular method to connect database tables to data streams, but it comes with drawbacks. The next evolution of the CDC pattern, first-class data products, provides resilient pipelines that support both real-time and batch processing while isolating upstream systems...
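As a rough illustration of the CDC side of this pattern (not the data-product design the post describes), the sketch below registers a Debezium PostgreSQL source connector with a Kafka Connect REST endpoint so table changes flow into Kafka topics. The connector name, hostnames, credentials, and table list are placeholders, not values from the post.

```python
# Minimal CDC sketch: create a Debezium PostgreSQL source connector via the
# Kafka Connect REST API. All names, hosts, and credentials are illustrative.
import requests

connector_config = {
    "name": "orders-cdc",  # hypothetical connector name
    "config": {
        "connector.class": "io.debezium.connector.postgresql.PostgresConnector",
        "database.hostname": "db.example.internal",   # placeholder database host
        "database.port": "5432",
        "database.user": "cdc_user",
        "database.password": "change-me",
        "database.dbname": "shop",
        "table.include.list": "public.orders",        # tables to capture
        "topic.prefix": "shop",                       # topics become shop.public.orders
    },
}

# Kafka Connect exposes a REST API; POST /connectors creates the connector.
resp = requests.post(
    "http://localhost:8083/connectors",               # assumed Connect endpoint
    json=connector_config,
    timeout=10,
)
resp.raise_for_status()
print(resp.json())
```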
Confluent Cloud Freight clusters are now Generally Available on AWS. In this blog, learn how Freight clusters can save you up to 90% at GBps+ scale.
Learn how to contribute to open source Apache Kafka by writing Kafka Improvement Proposals (KIPs) that solve problems and add features! Read on for real examples.
Read this Data in Motion Tour recap to get highlights and key insights from Singaporean business leaders leveraging data streaming in their organizations.
Confluent’s Create Embeddings Action for Flink helps you generate vector embeddings from real-time data to create a live semantic layer for your AI workflows.
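The Create Embeddings Action itself isn't shown here; as a loose stand-in for the idea, the sketch below consumes text events with the confluent-kafka Python client and turns each into a vector with a placeholder embed() function. The broker address, topic name, and embed() are assumptions for illustration only.

```python
# Sketch: turn a stream of text events into vector embeddings.
# Assumptions: a local Kafka broker, a "documents" topic, and a placeholder
# embed() function standing in for a real embedding model or service.
from confluent_kafka import Consumer

def embed(text: str) -> list[float]:
    # Placeholder: a real pipeline would call an embedding model here.
    return [float(ord(c) % 7) for c in text[:8]]

consumer = Consumer({
    "bootstrap.servers": "localhost:9092",   # assumed broker address
    "group.id": "embedding-demo",
    "auto.offset.reset": "earliest",
})
consumer.subscribe(["documents"])            # assumed topic name

try:
    while True:
        msg = consumer.poll(1.0)
        if msg is None or msg.error():
            continue
        text = msg.value().decode("utf-8")
        vector = embed(text)
        # A real semantic layer would upsert (text, vector) into a vector store.
        print(f"embedded {text[:30]!r} -> {len(vector)}-dim vector")
finally:
    consumer.close()
```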
In our latest Confluent Champion, staff solutions engineer Maria Berinde-Tampanariu shares how Confluent fosters a culture of motivation and growth.
Allium provides real-time, accessible blockchain data for analytics and business teams with the help of data streaming. Learn how here.
Effective supply chain management relies on the ready availability of well-governed, real-time data. Learn how Confluent facilitates supply chain optimization.
The rise of agentic AI has fueled excitement around agents that autonomously perform tasks, make recommendations, and execute complex workflows. This blog post details the design and architecture of PodPrep AI, an AI-powered research assistant that helps the author prepare for podcast interviews.
Confluent Cloud 2024 Q4 adds private networking and mTLS authentication, follower fetching, Flink updates, WarpStream features to support migration and governance, and more!
Discover how Confluent has transformed data management for Kmart and IAG in Australia and New Zealand with its real-time data streaming platform.
GenAI thrives on real-time contextual data: in a modern system, LLMs should engage, synthesize, and contribute rather than simply serve as queryable data stores.
In building the next generation of web agents, we need the simplest, fastest way to extract web data at scale for production use cases.
Continuing issues with hallucinations, the increasing independence of agentic AI systems, and the growing use of dynamic data sources are three AI trends you may want to monitor in 2025.
In this final part of the blog series, we bring it all together by exploring data streaming platforms (DSPs), event-driven architecture (EDA), and real-time data processing to scale AI-powered solutions across your organization.
In Part 2 of the series, we take things a step further by enhancing GenAI with the tools it needs to deliver smarter, more relevant responses. We introduce retrieval-augmented generation (RAG) and vector databases (VectorDBs), key technologies that provide LLMs with the context they need.
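As a generic illustration of the RAG pattern mentioned above (not the series' own implementation), the sketch below embeds a question, retrieves the nearest documents from a tiny in-memory store by cosine similarity, and builds a context-augmented prompt. The embed() function and the document set are toy placeholders.

```python
# Generic RAG sketch: retrieve the most relevant documents for a question
# and build a context-augmented prompt. embed() is a placeholder for a
# real embedding model; the documents are toy data for illustration.
import numpy as np

def embed(text: str) -> np.ndarray:
    # Placeholder: hash characters into a small fixed-size unit vector.
    vec = np.zeros(16)
    for i, ch in enumerate(text.lower()):
        vec[i % 16] += ord(ch)
    return vec / (np.linalg.norm(vec) + 1e-9)

documents = [
    "Kafka topics store ordered, replayable event logs.",
    "Flink jobs can enrich events with reference data in real time.",
    "Vector databases index embeddings for similarity search.",
]
doc_vectors = np.stack([embed(d) for d in documents])

def retrieve(question: str, k: int = 2) -> list[str]:
    q = embed(question)
    scores = doc_vectors @ q              # cosine similarity (vectors are normalized)
    top = np.argsort(scores)[::-1][:k]
    return [documents[i] for i in top]

question = "How do I search embeddings?"
context = "\n".join(retrieve(question))
prompt = f"Answer using only this context:\n{context}\n\nQuestion: {question}"
print(prompt)   # A real pipeline would send this prompt to an LLM.
```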
This blog series explores how technologies like generative AI, RAG, VectorDBs, and DSPs can work together to provide the freshest and most actionable data. Part 1 lays the foundation for understanding how data fuels AI, and why having the right data at the right time is essential for success.