Model Context Protocol (MCP), introduced by Anthropic, is a new standard that simplifies artificial intelligence (AI) integrations by providing a secure, consistent way to connect AI agents with external tools and data sources.
When we saw MCP’s potential, we immediately started exploring how we could bring real-time data streaming into the mix. With our long history of supporting open source and open standards, building an MCP server was a natural fit.
With this server, we achieved two big things:
We gave AI agents direct access to fresh, real-time data, ensuring that they always operate on the latest information.
We made managing Confluent easier with natural language so that users can configure topics, execute Flink SQL, and interact with their data infrastructure without wrestling with complex commands.
This blog post dives into what we built, how it works, and why real-time data is essential for the future of AI agents.
Let’s get into it.
Today, integrating AI systems with different platforms often requires custom-built solutions that are complex, time-consuming, and difficult to maintain. Developers frequently need to write bespoke code to pull data into AI agents, manually integrating APIs, databases, and external tools. Existing frameworks such as LangChain and LlamaIndex provide mechanisms for these integrations, but they often require one-off connectors for each system, creating fragile, hard-to-scale solutions.
MCP solves this by providing a universal standard that simplifies these connections. Instead of reinventing integrations for every tool or database, developers can use MCP as a standardized bridge between AI models and external data sources.
At its core, MCP standardizes how agents retrieve and interact with external data. It gives AI systems a structured way to, for example:
Pull customer records from a database
Retrieve documents from cloud storage
Execute workflows based on real-time inputs
By establishing a common protocol, MCP eliminates much of the complexity and redundancy in building AI-powered workflows. This allows developers to focus on agent logic rather than the underlying integration challenges.
MCP operates on a client-server architecture, where AI-powered applications (clients such as Claude Desktop) interact with MCP servers to access external data, tools, and structured prompts. This structured approach ensures that agents can retrieve real-time information, execute predefined actions, and maintain secure, consistent interactions with external systems.
At a high level, MCP works by defining three key components within its servers:
Tools – Functions that AI agents can call to perform specific actions, such as making API requests or executing commands (e.g., querying a weather API).
Resources – Data sources that AI agents can access, similar to REST API endpoints. These provide structured data without performing additional computation.
Prompts – Predefined templates that guide AI models in optimally using tools and resources.
MCP servers act as data gateways, exposing these tools, resources, and prompts to AI applications through a standardized interface. Clients, typically AI-powered systems like assistants or automation tools, communicate with these servers using JSON-RPC 2.0, a lightweight messaging protocol that ensures secure, two-way communication.
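To make the tools concept concrete, here’s a minimal sketch of an MCP server that exposes a single weather-lookup tool, written against the official TypeScript MCP SDK (@modelcontextprotocol/sdk). The get_forecast name and fetchForecast helper are illustrative stand-ins, and SDK details may shift between versions:

```typescript
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import { z } from "zod";

// Hypothetical weather lookup -- a stand-in for any external API call.
async function fetchForecast(city: string): Promise<string> {
  return `Sunny in ${city}`; // swap in a real HTTP request here
}

const server = new McpServer({ name: "weather-demo", version: "1.0.0" });

// A tool: a named function with a typed input schema that clients can invoke.
server.tool(
  "get_forecast",
  "Get the current weather forecast for a city",
  { city: z.string().describe("City name, e.g., Berlin") },
  async ({ city }) => ({
    content: [{ type: "text", text: await fetchForecast(city) }],
  })
);

// Serve over stdio so MCP clients (e.g., Claude Desktop) can launch it.
await server.connect(new StdioServerTransport());
```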
In practice, an AI-powered client will:
Connect to an MCP server – The client initiates a connection to discover available tools, resources, and prompts.
Query or invoke tools/resources – Based on user input, the client requests data or executes predefined functions via the MCP server.
Process and return responses – The AI agent processes the retrieved data, applies contextual reasoning, and delivers an appropriate response or action.
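In code, that three-step loop looks roughly like this with the TypeScript MCP SDK. The weather-server.js command and the city argument are illustrative; under the hood, the discovery and invocation calls map to the JSON-RPC methods tools/list and tools/call defined by the MCP spec:

```typescript
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StdioClientTransport } from "@modelcontextprotocol/sdk/client/stdio.js";

// 1) Connect to an MCP server (here, launching the weather server from above).
const client = new Client({ name: "demo-client", version: "1.0.0" });
await client.connect(
  new StdioClientTransport({ command: "node", args: ["weather-server.js"] })
);

// 2) Discover the tools the server exposes ("tools/list" under the hood).
const { tools } = await client.listTools();
console.log(tools.map((t) => t.name)); // e.g., ["get_forecast"]

// 3) Invoke one ("tools/call"); the result feeds the agent's reasoning.
const result = await client.callTool({
  name: "get_forecast",
  arguments: { city: "Berlin" },
});
console.log(result.content);
```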
This structured approach reduces integration complexity, improves security and governance, and ensures that agents operate with accurate, up-to-date information.
At Confluent, we’ve spent years helping companies integrate real-time data into their systems, and MCP presents an opportunity to extend that to AI agents in a standardized way.
AI systems need access to both real-time and historical data to be effective. Confluent already provides 120+ pre-built connectors, making it easy to stream data from databases, event systems, and software-as-a-service (SaaS) applications. By adding MCP support, we give agents direct access to these data sources without requiring one-off integrations.
This reduces the complexity of managing separate data pipelines and ensures that AI-driven workflows operate on the freshest available information.
We built an MCP server that connects directly to Confluent, allowing agents to interact with real-time data using natural language.
Instead of manually configuring topics, writing Flink SQL, or managing connectors, users can issue commands in plain language, and the server will translate them into executable actions. Out of the box, agents can use the server to:
Manage Kafka topics (create, delete, update)
Produce and consume messages
Execute Flink SQL queries
Manage and configure connectors
Tag and organize topics
The server makes it easier to connect your agents to the data flowing through Confluent and helps automate configuration and management.
Check it out here: https://github.com/confluentinc/mcp-confluent
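To give a sense of setup, here’s roughly how you’d register the server with an MCP client such as Claude Desktop. The package name and -e env-file flag reflect the repo’s README at the time of writing, and the .env path is a placeholder for a file holding your Confluent Cloud credentials; check the repo for current instructions:

```json
{
  "mcpServers": {
    "confluent": {
      "command": "npx",
      "args": ["-y", "@confluentinc/mcp-confluent", "-e", "/path/to/.env"]
    }
  }
}
```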
Each tool in the Confluent MCP server is defined with a name, description, and input parameters, allowing agents to interact with Confluent resources in a structured way.
The current implementation includes 20 built-in tools, and adding new functionality is straightforward: Just define a new tool with its schema and execution logic. This makes it easy to expand the system as new use cases arise.
With MCP, agents can interact with Confluent using natural language, removing the need for manual configurations.
To showcase this, we integrated the Confluent MCP server with Goose, an open source AI framework from Block. Goose is designed as an on-machine AI agent that helps developers automate complex tasks. Like the Claude Desktop client, it natively supports MCP, making it easy to connect with external systems.
Here’s a walk-through of how the Goose client interacts with Confluent through our MCP server.
User: List all the Kafka topics.
User: Can you save our conversation history?
Goose: Conversation history has been saved to the “claude-conversations” topic.
User: Help me sample some data from “cool” and “hello” topics.
User: Let’s create a tag called PII (personally identifiable information). Analyze the topics we’ve just created and tag them accordingly if they contain PII.
User: Let’s change the retention time for topics marked with PII to “1 day.”
These examples show how MCP simplifies interactions with Confluent by allowing natural language-driven data operations. Instead of writing CLI commands or API calls, users can manage Kafka topics, inspect data, and enforce policies in plain English.
If you wish to add new functionality that isn’t yet supported, you can do so in three simple steps:
1. Add a new value to the ToolName enum.
2. Register the new tool in the handlers map within the ToolFactory class, associating it with the handler class that will serve it. In this example, we’ll create a class named CreateTopicsHandler.
3. Create a class that extends the BaseToolHandler abstract class. This requires implementing two methods:
getToolConfig: defines the tool and how it appears to MCP clients.
handle: called when the tool is invoked.
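Here’s a simplified sketch of those three steps in TypeScript. The real ToolName, ToolFactory, and BaseToolHandler shapes live in the mcp-confluent repo; the stand-ins below only illustrate the pattern, not the actual source:

```typescript
import { z } from "zod";

// Step 1: add a new value to the ToolName enum (existing entries elided).
enum ToolName {
  CREATE_TOPICS = "create-topics",
}

// Illustrative stand-ins for the shapes defined in the repo.
interface ToolConfig {
  name: ToolName;
  description: string;
  inputSchema: z.ZodTypeAny;
}

abstract class BaseToolHandler {
  abstract getToolConfig(): ToolConfig;
  abstract handle(
    args: unknown
  ): Promise<{ content: { type: "text"; text: string }[] }>;
}

const createTopicsArgs = z.object({
  topicNames: z.array(z.string()).describe("Kafka topics to create"),
});

// Step 3: extend BaseToolHandler and implement both required methods.
class CreateTopicsHandler extends BaseToolHandler {
  // Defines the tool and how it appears to MCP clients.
  getToolConfig(): ToolConfig {
    return {
      name: ToolName.CREATE_TOPICS,
      description: "Create one or more Kafka topics",
      inputSchema: createTopicsArgs,
    };
  }

  // Called when an agent invokes the tool.
  async handle(args: unknown) {
    const { topicNames } = createTopicsArgs.parse(args);
    // ...call the Kafka Admin API here to actually create the topics...
    return {
      content: [
        { type: "text" as const, text: `Created: ${topicNames.join(", ")}` },
      ],
    };
  }
}

// Step 2: register the handler in ToolFactory's handlers map.
const handlers = new Map([[ToolName.CREATE_TOPICS, new CreateTopicsHandler()]]);
```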
By following these steps, you can easily add new tools to the MCP server, allowing AI agents to interact more effectively with Confluent resources. This flexibility ensures that the server can adapt to evolving requirements and support a growing range of use cases, making it a valuable asset for organizations looking to integrate AI with real-time data.
AI agents are only as good as the data they have access to.
If they’re making decisions based on stale information, their insights quickly lose relevance. To be effective, agents need real-time access to data across disparate systems, ensuring that they’re always working with the most up-to-date context.
That’s where MCP and our Confluent MCP server come in.
By providing a standardized way for agents to connect to real-time data streams, structured datasets, and external tools, we eliminate the need for custom, one-off integrations. Agents can retrieve live data, execute actions, and make decisions based on the latest available information, without developers having to write bespoke code for every data source.
With Tableflow, we can take this even further by unifying real-time and stored data.
Agents can:
Perform federated search across vector stores, databases, and data lakes
Access both historical datasets and live streaming data from a single interface
Query Apache Iceberg™ and Delta Tables alongside event streams, ensuring seamless access to structured and semi-structured data
This means AI agents are no longer limited to a snapshot of the past. Instead, they can react to live events, correlate them with stored knowledge, and deliver timely, informed insights.
Because Confluent already provides 120+ pre-built connectors, AI agents can interconnect with enterprise systems, SaaS platforms, databases, and event streams without additional development effort. The MCP server abstracts away the complexity, making all this data immediately accessible through a common interface.
With this approach, developers don’t need to build and maintain fragile, custom pipelines. Instead, they can focus on designing intelligent agents.
If you're building AI-driven applications and want to see how MCP can simplify your workflows, try out our MCP server and let us know what you think.
Check it out on GitHub, experiment with it, and share your feedback. We’d love to hear from you!
Apache®, Apache Kafka®, and Apache Flink® are trademarks of the Apache Software Foundation in the United States and/or other countries. No endorsement by the Apache Software Foundation is implied by using these marks. All other trademarks are the property of their respective owners.