Building a Deep Research Agent Using MCP-Agent

Summary

Documenting my journey building a general-purpose deep research agent powered by MCP, and sharing the valuable (and sometimes painful) lessons learned along the way.

Background

My name is Sarmad Qadri and I'm the creator of the open-source project mcp-agent. My philosophy for agent development in 2025 can be summarized as: MCP is all you need. Or, more verbosely: connect state-of-the-art LLMs to MCP servers, and leverage simple design patterns to let them make tool calls, gather context, and make decisions.

Over the past few months, I've been asked many times when mcp-agent would support deep research workflows. So I set out to build an open-source, general-purpose agent that can work like Claude Code for complex tasks, including deep research, but also multi-step workflows that require making MCP tool calls.

It turns out this is a lot more complex than expected, even if the architectural underpinnings are conceptually simple. This post shares the lessons I learned along the way, to help others build their own deep research agents.

You can find the open-source Deep Orchestrator agent here: https://github.com/lastmile-ai/mcp-agent/src/mcp_agent/workflows/deep_orchestrator/

Objective: Deep Research, but with MCP

The first deep research agents started out with access only to the internet and the filesystem. The promise of MCP is to dramatically expand that list of tools while adhering to the same architecture. The goal is to be able to do deep research connected to an internal data warehouse, or to any data source accessible via an MCP tool or resource.
Plus, being able to mutate state by performing tool calls turns the agent from just a research agent into something much more powerful and general-purpose.

So, following the Deep Research approach, I settled on the following requirements:

Core Functionality - Be able to complete complex tasks (including deep research and multi-step workflows) by making multiple MCP tool calls
Context Management - Use the outputs from sequential steps to gather context ...
