Quickstart: Build Model Context Protocol (MCP) Agents with Zeta Alpha's Agents SDK
- Dinos Papakostas
AI agents are only as useful as the tools they can securely and reliably access. The Model Context Protocol (MCP) standardizes how agents talk to external tools and data sources, and Zeta Alpha's Agents SDK offers a fast path from prototype to production, with streaming, tool execution, configuration, and deployment options out of the box.
In this guide, we'll bootstrap a minimal chat agent, expose it over a REST API, and connect it to an MCP server with just a few lines of code.

Getting Started
We will use uv for package and project management.
Please follow the installation steps if you don't have uv installed yet.
# create the project directory
uv init my-project --python 3.12 && cd my-project
# install the Agents SDK (with MCP support)
uv add "zetaalpha.rag-agents[mcp]"
# activate the environment
source .venv/bin/activate
# create the boilerplate for the first agent
rag_agents init agents
If you followed these commands, you should have a project structure in my-project like this:
.
├── .venv
│   └── ...
├── agents
│   ├── __init__.py
│   ├── .gitignore
│   ├── agent_setups.json
│   ├── chat_agent.py
│   └── env
│       └── agent_setups.json
├── .gitignore
├── .python-version
├── main.py
├── pyproject.toml
├── README.md
└── uv.lock
The default chat_agent simply forwards the provided conversation to your LLM provider and streams back the response. During rag_agents init, the SDK will prompt you for an OpenAI API key and configure gpt-4o-mini as the model in agents/agent_setups.json.
The boilerplate code that gets generated in agents/chat_agent.py looks like this:
from typing import AsyncGenerator, List

from zav.agents_sdk import (
    ChatAgentClassRegistry,
    ChatMessage,
    StreamableChatAgent,
)
from zav.agents_sdk.adapters import ZAVChatCompletionClient


@ChatAgentClassRegistry.register()
class ChatAgent(StreamableChatAgent):
    agent_name = "chat_agent"

    def __init__(self, client: ZAVChatCompletionClient):
        self.client = client

    async def execute_streaming(
        self, conversation: List[ChatMessage]
    ) -> AsyncGenerator[ChatMessage, None]:
        response = await self.client.complete(
            messages=conversation,
            max_tokens=2048,
            stream=True,
        )
        async for chat_client_response in response:
            if chat_client_response.error is not None:
                raise chat_client_response.error
            if chat_client_response.chat_completion is None:
                raise Exception("No response from chat completion client")
            yield ChatMessage.from_orm(chat_client_response.chat_completion)
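If the async-generator pattern in execute_streaming is unfamiliar, here is a minimal, self-contained sketch of the same idea (illustration only, not SDK code):

import asyncio
from typing import AsyncGenerator

async def stream_words(text: str) -> AsyncGenerator[str, None]:
    # Like execute_streaming, this yields chunks as they become
    # available instead of waiting for the full result.
    for word in text.split():
        await asyncio.sleep(0.1)  # simulate waiting on the LLM provider
        yield word

async def main() -> None:
    async for chunk in stream_words("responses arrive one chunk at a time"):
        print(chunk)

asyncio.run(main())

The server consumes your agent's generator in the same way, streaming each yielded ChatMessage back to the caller as it is produced.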
We can expose this agent via a REST API:
rag_agents serve agents
By default, the server runs at localhost:8000.
In a new terminal window, we can query the agent:
curl -sS -X POST "http://localhost:8000/chats/responses?tenant=zetaalpha" \
  -H "Content-Type: application/json" \
  -d '{
    "agent_identifier": "chat_agent",
    "conversation": [
      {
        "sender": "user",
        "content": "Convert 32°C to F. Answer with the final result."
      }
    ]
  }' | jq
You should see a response like:
{
  "agent_identifier": "chat_agent",
  "conversation": [
    {
      "sender": "user",
      "content": "Convert 32°C to F. Answer with the final result."
    },
    {
      "sender": "bot",
      "content": "32°C is equal to 89.6°F."
    }
  ]
}
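As a quick sanity check, 32 × 9/5 + 32 = 89.6, so the model got it right. If you'd rather call the API from Python than curl, the same request looks like this (a sketch assuming you've added the httpx package, e.g. with uv add httpx):

import httpx

response = httpx.post(
    "http://localhost:8000/chats/responses",
    params={"tenant": "zetaalpha"},
    json={
        "agent_identifier": "chat_agent",
        "conversation": [
            {
                "sender": "user",
                "content": "Convert 32°C to F. Answer with the final result.",
            }
        ],
    },
    timeout=60.0,
)
response.raise_for_status()
# Print the bot's reply (the last message in the returned conversation)
print(response.json()["conversation"][-1]["content"])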
Adding MCP Tools
The latest release of the Agents SDK makes it easy to connect to MCP servers.
Update the agent code to inject and use the MCPToolsProvider:
# ... previous imports ...
from zav.agents_sdk.adapters import MCPToolsProvider


@ChatAgentClassRegistry.register()
class ChatAgent(StreamableChatAgent):
    agent_name = "chat_agent"

    def __init__(
        self,
        client: ZAVChatCompletionClient,
        # Inject the MCP tools provider dependency
        tools_provider: MCPToolsProvider,
    ):
        self.client = client
        self.tools_provider = tools_provider

    async def execute_streaming(
        self, conversation: List[ChatMessage]
    ) -> AsyncGenerator[ChatMessage, None]:
        # Discover tools available over MCP
        mcp_tools = await self.tools_provider.get_tools()
        self.tools_registry.extend(mcp_tools)

        response = await self.client.complete(
            messages=conversation,
            max_tokens=2048,
            stream=True,
            # Add the tools to the agent and auto-execute them
            tools=self.tools_registry,
            execute_tools=True,
        )
        # ... rest of the code as is ...
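What does execute_tools=True buy you? Conceptually, it runs the standard tool-calling loop on your behalf: the model requests a tool, the SDK executes it, appends the result to the conversation, and calls the model again until it produces a final answer. The following self-contained sketch illustrates that loop with stand-in types; it is intuition only, not the SDK's actual implementation:

import asyncio
from dataclasses import dataclass, field
from typing import Any, Callable, Dict, List

# Stand-in types for intuition only; the SDK's real types differ.
@dataclass
class ToolCall:
    name: str
    arguments: Dict[str, Any]

@dataclass
class Reply:
    content: str = ""
    tool_calls: List[ToolCall] = field(default_factory=list)

# A stubbed "tool", standing in for an MCP tool such as a time lookup.
TOOLS: Dict[str, Callable[..., str]] = {
    "get_current_time": lambda timezone: f"12:00 in {timezone} (stubbed)",
}

async def fake_llm(conversation: List[dict]) -> Reply:
    # First turn: the "model" requests a tool call.
    # Second turn: it answers using the tool result in the conversation.
    if not any(m["sender"] == "tool" for m in conversation):
        return Reply(
            tool_calls=[ToolCall("get_current_time", {"timezone": "America/Los_Angeles"})]
        )
    return Reply(content=f"Answer based on tool output: {conversation[-1]['content']}")

async def run_with_tools(conversation: List[dict]) -> str:
    # The loop that execute_tools=True handles for you:
    while True:
        reply = await fake_llm(conversation)
        if not reply.tool_calls:  # no tool requested: final answer reached
            return reply.content
        for call in reply.tool_calls:  # execute each requested tool
            result = TOOLS[call.name](**call.arguments)
            conversation.append({"sender": "tool", "content": result})

print(asyncio.run(run_with_tools([{"sender": "user", "content": "Time in SF?"}])))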
For this example, we'll use the Time MCP Server, a demo server that lets the agent get the current system time and do timezone conversions.
Update agents/agent_setups.json with an MCP configuration:
{
  "agent_identifier": "chat_agent",
  "agent_name": "chat_agent",
  "llm_client_configuration": {...},
  "agent_configuration": {
    "mcp_configuration": {
      "servers": [
        {
          "transport": "stdio",
          "transport_config": {
            "stdio": {
              "name": "time",
              "command": "uvx",
              "args": ["mcp-server-time"]
            }
          }
        }
      ]
    }
  }
}
Notes:
- uvx will automatically fetch and run the mcp-server-time CLI if it isn't already installed. If you prefer, you can also install it explicitly (see the one-liner below).
- After changing the config and agent, restart the REST API server (rag_agents serve agents).
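For example, one way to install the time server explicitly, rather than letting uvx fetch it on demand, is as a standalone CLI via uv:

uv tool install mcp-server-time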
Now let's try querying the agent again:
curl -sS -X POST "http://localhost:8000/chats/responses?tenant=zetaalpha" \
  -H "Content-Type: application/json" \
  -d '{
    "agent_identifier": "chat_agent",
    "conversation": [
      {
        "sender": "user",
        "content": "What time is it now in San Francisco?"
      }
    ]
  }' | jq
You should get something like:
{
  "agent_identifier": "chat_agent",
  "conversation": [
    {
      "sender": "user",
      "content": "What time is it now in San Francisco?"
    },
    {
      "sender": "bot",
      "content": "The current time in San Francisco is 6:00 AM on August 8, 2025."
    }
  ]
}
(Output will vary depending on when you run the query.)
For more information about the Agents SDK, check out our documentation.
With just a few lines of code and minimal configuration, you turned a basic AI chat bot into an MCP-powered agent. If you are interested in building agents on top of your company's internal knowledge, reach out to us at Zeta Alpha - we'd love to help you design, ship, and scale them.