Model-agnostic LangChain/LangGraph agents powered entirely by MCP tools over HTTP/SSE.
Discover MCP tools dynamically. Bring your own LangChain model. Build production-ready agents—fast.
📚 Documentation • 🛠 Issues
MCP first. Agents shouldn’t hardcode tools — they should discover and call them. DeepMCPAgent builds that bridge.
Install from PyPI:
```shell
pip install "deepmcpagent[deep]"
```
This installs DeepMCPAgent with DeepAgents support (recommended) for the best agent loop. Other optional extras:
- `dev` → linting, typing, tests
- `docs` → MkDocs + Material + mkdocstrings
- `examples` → dependencies used by bundled examples

```shell
# install with deepagents + dev tooling
pip install "deepmcpagent[deep,dev]"
```
⚠️ If you’re using zsh, remember to quote extras:
```shell
pip install "deepmcpagent[deep,dev]"
```
Run the bundled example MCP server:

```shell
python examples/servers/math_server.py
```
This serves an MCP endpoint at: http://127.0.0.1:8000/mcp
In another terminal, run the example agent:

```shell
python examples/use_agent.py
```
What you’ll see:

DeepMCPAgent lets you pass any LangChain chat model instance (or a provider id string if you prefer `init_chat_model`):
```python
import asyncio
from deepmcpagent import HTTPServerSpec, build_deep_agent

# choose your model:
# from langchain_openai import ChatOpenAI
# model = ChatOpenAI(model="gpt-4.1")

# from langchain_anthropic import ChatAnthropic
# model = ChatAnthropic(model="claude-3-5-sonnet-latest")

# from langchain_community.chat_models import ChatOllama
# model = ChatOllama(model="llama3.1")

async def main():
    servers = {
        "math": HTTPServerSpec(
            url="http://127.0.0.1:8000/mcp",
            transport="http",  # or "sse"
            # headers={"Authorization": "Bearer <token>"},
        ),
    }

    graph, _ = await build_deep_agent(
        servers=servers,
        model=model,
        instructions="Use MCP tools precisely.",
    )

    out = await graph.ainvoke(
        {"messages": [{"role": "user", "content": "add 21 and 21 with tools"}]}
    )
    print(out)

asyncio.run(main())
```
Tip: If you pass a string like `"openai:gpt-4.1"`, we’ll call LangChain’s `init_chat_model()` for you (and it will read env vars like `OPENAI_API_KEY`). Passing a model instance gives you full control.
DeepMCPAgent v0.5 introduces Cross-Agent Communication — agents that can talk to each other without extra servers, message queues, or orchestration layers.
You can now attach one agent as a peer inside another, turning it into a callable tool.
Each peer appears automatically as ask_agent_<name> or can be reached via broadcast_to_agents for parallel reasoning across multiple agents.
This means your agents can delegate, collaborate, and critique each other — all through the same MCP tool interface.
It’s lightweight, model-agnostic, and fully transparent: every peer call is traced like any other tool invocation.
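The `ask_agent_<name>` convention means peer tool names are derived mechanically from the names you register. A hypothetical sketch of that mapping (not the library’s actual code):

```python
def peer_tool_names(peer_names: list[str]) -> list[str]:
    """Derive one ask_agent_<name> tool per registered peer (illustrative only)."""
    return [f"ask_agent_{name}" for name in peer_names]

print(peer_tool_names(["researcher", "critic"]))
# ['ask_agent_researcher', 'ask_agent_critic']
```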
```python
import asyncio
from deepmcpagent import HTTPServerSpec, build_deep_agent
from deepmcpagent.cross_agent import CrossAgent

async def main():
    # 1️⃣ Build a "research" peer agent
    research_graph, _ = await build_deep_agent(
        servers={"web": HTTPServerSpec(url="http://127.0.0.1:8000/mcp")},
        model="openai:gpt-4o-mini",
        instructions="You are a focused research assistant that finds and summarizes sources.",
    )

    # 2️⃣ Build the main agent and attach the peer as a tool
    main_graph, _ = await build_deep_agent(
        servers={"math": HTTPServerSpec(url="http://127.0.0.1:9000/mcp")},
        model="openai:gpt-4.1",
        instructions="You are a lead analyst. Use peers when you need research or summarization.",
        cross_agents={
            "researcher": CrossAgent(agent=research_graph, description="A web research peer.")
        },
        trace_tools=True,  # see all tool calls + peer responses in console
    )

    # 3️⃣ Ask a question — the main agent can now call the researcher
    result = await main_graph.ainvoke({
        "messages": [{"role": "user", "content": "Find recent research on AI ethics and summarize it."}]
    })
    print(result)

asyncio.run(main())
```
🧩 Result:
Your main agent automatically calls ask_agent_researcher(...) when it decides delegation makes sense, and the peer agent returns its best final answer — all transparently handled by the MCP layer.
No new infrastructure. No complex orchestration. Just agents helping agents, powered entirely by MCP over HTTP/SSE.
🧠 One framework, many minds — DeepMCPAgent turns individual LLMs into a cooperative system.
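The broadcast behavior described above amounts to fanning one prompt out to every peer concurrently and collecting the replies. A self-contained sketch of that idea with `asyncio.gather` — the peer callables here are stand-ins, not the real agent API:

```python
import asyncio

async def broadcast(peers: dict, prompt: str) -> dict[str, str]:
    """Send one prompt to every peer concurrently and collect replies (illustrative)."""
    names = list(peers)
    replies = await asyncio.gather(*(peers[name](prompt) for name in names))
    return dict(zip(names, replies))

async def demo() -> dict[str, str]:
    # stand-in peer: a real peer would wrap an agent graph's ainvoke()
    async def echo_peer(prompt: str) -> str:
        return f"echo: {prompt}"
    return await broadcast({"researcher": echo_peer}, "hi")

print(asyncio.run(demo()))  # {'researcher': 'echo: hi'}
```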
```shell
# list tools from one or more HTTP servers
deepmcpagent list-tools \
  --http name=math url=http://127.0.0.1:8000/mcp transport=http \
  --model-id "openai:gpt-4.1"

# interactive agent chat (HTTP/SSE servers only)
deepmcpagent run \
  --http name=math url=http://127.0.0.1:8000/mcp transport=http \
  --model-id "openai:gpt-4.1"
```
The CLI accepts repeated `--http` blocks; add `header.X=Y` pairs for auth:

```shell
--http name=ext url=https://api.example.com/mcp transport=http header.Authorization="Bearer TOKEN"
```
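For intuition, the `name=value` / `header.X=Y` grammar can be parsed in a few lines of Python. This is an illustrative re-implementation, not the CLI’s actual parser:

```python
def parse_http_block(tokens: list[str]) -> dict:
    """Parse name=value tokens from one --http block; header.X=Y pairs
    nest under a 'headers' dict (sketch only, not the real CLI code)."""
    spec: dict = {"headers": {}}
    for tok in tokens:
        key, _, value = tok.partition("=")
        if key.startswith("header."):
            spec["headers"][key[len("header."):]] = value
        else:
            spec[key] = value
    return spec

block = parse_http_block([
    "name=ext",
    "url=https://api.example.com/mcp",
    "transport=http",
    "header.Authorization=Bearer TOKEN",
])
print(block["name"], block["headers"])  # ext {'Authorization': 'Bearer TOKEN'}
```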
These diagrams reflect the current implementation:
- Model is required (string provider-id or LangChain model instance).
- MCP tools only, discovered at runtime via FastMCP (HTTP/SSE).
- Agent loop prefers DeepAgents if installed; otherwise LangGraph ReAct.
- Tools are typed via JSON-Schema ➜ Pydantic ➜ LangChain BaseTool.
- Fancy console output shows discovered tools, calls, results, and final answer.
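To make the JSON-Schema ➜ Pydantic step concrete, here is a rough sketch of the first half of that pipeline — mapping a tool’s input schema to Python field types. It is illustrative only; the real implementation builds full Pydantic models:

```python
# Minimal JSON-Schema type -> Python type table (illustrative subset)
JSON_TO_PY = {"string": str, "integer": int, "number": float,
              "boolean": bool, "array": list, "object": dict}

def schema_to_fields(schema: dict) -> dict[str, tuple[type, bool]]:
    """Map a JSON-Schema 'properties' block to (python type, required) pairs."""
    required = set(schema.get("required", []))
    return {
        name: (JSON_TO_PY.get(prop.get("type", "string"), str), name in required)
        for name, prop in schema.get("properties", {}).items()
    }

fields = schema_to_fields({
    "type": "object",
    "properties": {"a": {"type": "integer"}, "b": {"type": "integer"}},
    "required": ["a", "b"],
})
```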
```shell
# install dev tooling
pip install -e ".[dev]"

# lint & type-check
ruff check .
mypy

# run tests
pytest -q
```
Use `headers` in `HTTPServerSpec` to deliver bearer/OAuth tokens to servers.

**PEP 668: externally managed environment (macOS + Homebrew)**
Use a virtualenv:

```shell
python3 -m venv .venv
source .venv/bin/activate
```
**404 Not Found when connecting**
Ensure your server uses a path (e.g., `/mcp`) and your client URL includes it.
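A quick way to sanity-check this in code — the helper below is just an illustration using the standard library:

```python
from urllib.parse import urlparse

def has_mcp_path(url: str) -> bool:
    """The endpoint path (e.g. /mcp) must be part of the client URL (sketch)."""
    return urlparse(url).path not in ("", "/")

print(has_mcp_path("http://127.0.0.1:8000"))      # False -> likely 404
print(has_mcp_path("http://127.0.0.1:8000/mcp"))  # True
```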
**Tool calls failing / attribute errors**
Ensure you’re on the latest version; our tool wrapper uses `PrivateAttr` for client state.
**High token counts**
That’s normal with tool-calling models. Use smaller models for dev.
Apache-2.0 — see LICENSE.