
🤖 DeepMCPAgent

Model-agnostic LangChain/LangGraph agents powered entirely by MCP tools over HTTP/SSE.


Discover MCP tools dynamically. Bring your own LangChain model. Build production-ready agents—fast.

📚 Documentation • 🛠 Issues


✨ Why DeepMCPAgent?

  • 🔌 Zero manual tool wiring — tools are discovered dynamically from MCP servers (HTTP/SSE)
  • 🌐 External APIs welcome — connect to remote MCP servers (with headers/auth)
  • 🧠 Model-agnostic — pass any LangChain chat model instance (OpenAI, Anthropic, Ollama, Groq, local, …)
  • DeepAgents (optional) — if installed, you get a deep agent loop; otherwise robust LangGraph ReAct fallback
  • 🛠️ Typed tool args — JSON-Schema → Pydantic → LangChain BaseTool (typed, validated calls)
  • 🧪 Quality bar — mypy (strict), ruff, pytest, GitHub Actions, docs

MCP first. Agents shouldn’t hardcode tools — they should discover and call them. DeepMCPAgent builds that bridge.


🚀 Installation

Install from PyPI:

pip install "deepmcpagent[deep]"

This installs DeepMCPAgent with DeepAgents support (recommended) for the best agent loop. Other optional extras:

  • dev → linting, typing, tests
  • docs → MkDocs + Material + mkdocstrings
  • examples → dependencies used by bundled examples
# install with deepagents + dev tooling
pip install "deepmcpagent[deep,dev]"

⚠️ If you’re using zsh, remember to quote extras:

pip install "deepmcpagent[deep,dev]"

🚀 Quickstart

1) Start a sample MCP server (HTTP)

python examples/servers/math_server.py

This serves an MCP endpoint at: http://127.0.0.1:8000/mcp

2) Run the example agent (with fancy console output)

python examples/use_agent.py

What you’ll see:

(screenshot: the console lists the discovered tools, each tool call and its result, and the final answer)


🧑‍💻 Bring-Your-Own Model (BYOM)

DeepMCPAgent lets you pass any LangChain chat model instance (or a provider id string if you prefer init_chat_model):

import asyncio
from deepmcpagent import HTTPServerSpec, build_deep_agent

# choose your model (exactly one must be active):
from langchain_openai import ChatOpenAI
model = ChatOpenAI(model="gpt-4.1")
# from langchain_anthropic import ChatAnthropic
# model = ChatAnthropic(model="claude-3-5-sonnet-latest")
# from langchain_community.chat_models import ChatOllama
# model = ChatOllama(model="llama3.1")

async def main():
    servers = {
        "math": HTTPServerSpec(
            url="http://127.0.0.1:8000/mcp",
            transport="http",  # or "sse"
            # headers={"Authorization": "Bearer <token>"},
        ),
    }
    graph, _ = await build_deep_agent(
        servers=servers,
        model=model,
        instructions="Use MCP tools precisely.",
    )
    out = await graph.ainvoke(
        {"messages": [{"role": "user", "content": "add 21 and 21 with tools"}]}
    )
    print(out)

asyncio.run(main())

Tip: If you pass a string like "openai:gpt-4.1", we’ll call LangChain’s init_chat_model() for you (and it will read env vars like OPENAI_API_KEY). Passing a model instance gives you full control.
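As a rough illustration of that convenience path, a provider-id string can be split into its provider and model parts before being handed to `init_chat_model()`. This is a hypothetical sketch, not DeepMCPAgent's actual internals:

```python
# Hypothetical sketch: splitting a "provider:model" id string.
# The real code path delegates to LangChain's init_chat_model().

def split_provider_id(model_id: str) -> tuple[str, str]:
    """Split 'provider:model' into (provider, model)."""
    provider, sep, model = model_id.partition(":")
    if not sep:
        # No provider prefix: leave inference to init_chat_model.
        return "", model_id
    return provider, model

print(split_provider_id("openai:gpt-4.1"))  # ('openai', 'gpt-4.1')
```

Passing a model instance skips this path entirely, which is why it gives you full control over provider-specific options.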


🤝 Cross-Agent Communication

DeepMCPAgent v0.5 introduces Cross-Agent Communication — agents that can talk to each other without extra servers, message queues, or orchestration layers.

You can now attach one agent as a peer inside another, turning it into a callable tool.
Each peer appears automatically as ask_agent_<name> or can be reached via broadcast_to_agents for parallel reasoning across multiple agents.

This means your agents can delegate, collaborate, and critique each other — all through the same MCP tool interface.
It’s lightweight, model-agnostic, and fully transparent: every peer call is traced like any other tool invocation.
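Conceptually, exposing a peer as `ask_agent_<name>` amounts to wrapping each agent callable in a named tool, plus one fan-out helper. The sketch below is illustrative only (the names mirror the README, but this is not the library's implementation):

```python
# Illustrative sketch of peer-as-tool wiring; not DeepMCPAgent internals.
from typing import Callable, Dict

def make_peer_tools(peers: Dict[str, Callable[[str], str]]) -> dict:
    """Wrap each peer agent as a tool keyed ask_agent_<name>."""
    tools: dict = {}
    for name, agent in peers.items():
        def ask(question: str, _agent=agent):  # bind agent per loop iteration
            return _agent(question)
        tools[f"ask_agent_{name}"] = ask

    # broadcast_to_agents fans the same question out to every peer.
    def broadcast(question: str) -> Dict[str, str]:
        return {name: agent(question) for name, agent in peers.items()}

    tools["broadcast_to_agents"] = broadcast
    return tools
```

Because each wrapper is just another tool, peer calls show up in tracing exactly like MCP tool invocations.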


💻 Example

import asyncio
from deepmcpagent import HTTPServerSpec, build_deep_agent
from deepmcpagent.cross_agent import CrossAgent

async def main():
    # 1️⃣ Build a "research" peer agent
    research_graph, _ = await build_deep_agent(
        servers={"web": HTTPServerSpec(url="http://127.0.0.1:8000/mcp")},
        model="openai:gpt-4o-mini",
        instructions="You are a focused research assistant that finds and summarizes sources.",
    )

    # 2️⃣ Build the main agent and attach the peer as a tool
    main_graph, _ = await build_deep_agent(
        servers={"math": HTTPServerSpec(url="http://127.0.0.1:9000/mcp")},
        model="openai:gpt-4.1",
        instructions="You are a lead analyst. Use peers when you need research or summarization.",
        cross_agents={
            "researcher": CrossAgent(agent=research_graph, description="A web research peer.")
        },
        trace_tools=True,  # see all tool calls + peer responses in console
    )

    # 3️⃣ Ask a question — the main agent can now call the researcher
    result = await main_graph.ainvoke({
        "messages": [{"role": "user", "content": "Find recent research on AI ethics and summarize it."}]
    })
    print(result)

asyncio.run(main())

🧩 Result: Your main agent automatically calls ask_agent_researcher(...) when it decides delegation makes sense, and the peer agent returns its best final answer — all transparently handled by the MCP layer.


💡 Use Cases

  • Researcher → Writer → Editor pipelines
  • Safety or reviewer peers that audit outputs
  • Retrieval or reasoning specialists
  • Multi-model ensembles combining small and large LLMs

No new infrastructure. No complex orchestration. Just agents helping agents, powered entirely by MCP over HTTP/SSE.

🧠 One framework, many minds — DeepMCPAgent turns individual LLMs into a cooperative system.


🖥️ CLI (no Python required)

# list tools from one or more HTTP servers
deepmcpagent list-tools \
  --http name=math url=http://127.0.0.1:8000/mcp transport=http \
  --model-id "openai:gpt-4.1"

# interactive agent chat (HTTP/SSE servers only)
deepmcpagent run \
  --http name=math url=http://127.0.0.1:8000/mcp transport=http \
  --model-id "openai:gpt-4.1"

The CLI accepts repeated --http blocks; add header.X=Y pairs for auth:

--http name=ext url=https://api.example.com/mcp transport=http header.Authorization="Bearer TOKEN"
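To make the `--http` block syntax concrete, here is a minimal sketch of how a list of `key=value` tokens (including `header.X=Y` pairs) could be folded into a server spec. The function name is hypothetical; the real CLI's parser may differ:

```python
# Hedged sketch of parsing one --http block's key=value tokens
# into a server-spec dict. Not the actual CLI implementation.

def parse_http_block(pairs: list[str]) -> dict:
    spec: dict = {"headers": {}}
    for pair in pairs:
        key, _, value = pair.partition("=")
        if key.startswith("header."):
            # header.Authorization=Bearer TOKEN -> headers["Authorization"]
            spec["headers"][key[len("header."):]] = value
        else:
            spec[key] = value
    return spec

block = parse_http_block([
    "name=ext",
    "url=https://api.example.com/mcp",
    "transport=http",
    "header.Authorization=Bearer TOKEN",
])
```

Repeating `--http` simply yields one such spec per block, which is how a single CLI invocation can target several servers.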

Full Architecture & Agent Flow

1) High-level Architecture (modules & data flow) [diagram]

2) Runtime Sequence (end-to-end tool call) [diagram]

3) Agent Control Loop (planning & acting) [diagram]

4) Code Structure (types & relationships) [diagram]
These diagrams reflect the current implementation:

  • Model is required (string provider-id or LangChain model instance).
  • MCP tools only, discovered at runtime via FastMCP (HTTP/SSE).
  • Agent loop prefers DeepAgents if installed; otherwise LangGraph ReAct.
  • Tools are typed via JSON-Schema ➜ Pydantic ➜ LangChain BaseTool.
  • Fancy console output shows discovered tools, calls, results, and final answer.
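The JSON-Schema ➜ Pydantic ➜ BaseTool step above can be pictured with a toy validator. This stdlib-only sketch only mimics the idea of typed, validated tool arguments; the real pipeline builds Pydantic models, which this does not reproduce:

```python
# Toy sketch of validating tool args against a JSON Schema fragment.
# The real implementation generates Pydantic models instead.

JSON_TO_PY = {"string": str, "integer": int, "number": float, "boolean": bool}

def validate_args(schema: dict, args: dict) -> dict:
    props = schema.get("properties", {})
    required = set(schema.get("required", []))
    missing = required - args.keys()
    if missing:
        raise ValueError(f"missing required args: {sorted(missing)}")
    for name, value in args.items():
        expected = JSON_TO_PY.get(props.get(name, {}).get("type", ""), object)
        if not isinstance(value, expected):
            raise TypeError(f"{name}: expected {expected.__name__}")
    return args

# e.g. a math server's "add" tool might advertise this schema:
schema = {
    "properties": {"a": {"type": "integer"}, "b": {"type": "integer"}},
    "required": ["a", "b"],
}
validate_args(schema, {"a": 21, "b": 21})  # passes validation
```

Typing the arguments up front is what lets bad model-generated calls fail fast, before they ever reach the MCP server.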

🧪 Development

# install dev tooling
pip install -e ".[dev]"

# lint & type-check
ruff check .
mypy

# run tests
pytest -q

🛡️ Security & Privacy

  • Your keys, your model — we don’t enforce a provider; pass any LangChain model.
  • Use HTTP headers in HTTPServerSpec to deliver bearer/OAuth tokens to servers.

🧯 Troubleshooting

  • PEP 668 "externally managed environment" (macOS + Homebrew): use a virtualenv:

    python3 -m venv .venv
    source .venv/bin/activate
  • 404 Not Found when connecting: ensure your server serves a path (e.g., /mcp) and your client URL includes it.

  • Tool calls failing / attribute errors: ensure you're on the latest version; our tool wrapper uses PrivateAttr for client state.

  • High token counts: that's normal with tool-calling models. Use smaller models for dev.


📄 License

Apache-2.0 — see LICENSE.

