LLM function calls don't scale; code orchestration is simpler, more effective.

20 May, 2025

TL;DR: Giving LLMs the full output of tool calls is costly and slow. Output schemas will let us get structured data back, so the LLM can orchestrate processing with generated code. Tool calling in code is simpler and more effective.

One common practice for working with MCP tool calls is to put the output of a tool back into the LLM as a message and ask the LLM for the next step. The hope is that the model figures out how to interpret the data and identifies the correct next action to take. This can work beautifully when the amount of data is small, but when we tried MCP servers with real-world data, we found that it quickly breaks down.

MCP in the real world

We use Linear and Intercom at our company. We connected to their latest official MCP servers, released last week, to understand how they return tool-call results. It turns out that both servers return large JSON blobs in their text content. These appear to mirror their APIs, except that the text content comes with no pre-defined schema, so the only reasonable way to parse it is to have the LLM interpret the data.

These JSON blobs are huge! When we asked Linear's MCP server to list the issues in our project, the tool call defaulted to returning only 50 issues, yet that came to ~70k characters, corresponding to ~25k tokens. The JSON also contains lots of id fields that consume many tokens without being semantically meaningful.

When using Claude with MCPs, the entire JSON blob appears to be sent back to the model verbatim. This approach quickly runs into trouble. For example, if we wanted the AI to sort all the issues by due date and display them, it would have to reproduce every issue verbatim as output tokens! That would be slow and costly, and the model could silently drop data. Doing the same work in code is trivial (see the sketch at the end of this post). The data in our issues also often contained a lot of distracting information: steps to reproduce an issue, errors, maybe...
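To make the contrast concrete, here is a minimal sketch of the code-orchestration approach, assuming the tool output can be parsed as JSON. The issue fields (identifier, title, dueDate) and the sample data are illustrative, not Linear's actual MCP payload.

import json

# Illustrative excerpt of the kind of JSON blob a "list issues" tool
# might return; field names and values are assumptions, not Linear's
# actual MCP payload.
raw = """[
  {"identifier": "ENG-42", "title": "Fix login crash", "dueDate": "2025-05-30"},
  {"identifier": "ENG-17", "title": "Dashboard loads slowly", "dueDate": "2025-05-23"},
  {"identifier": "ENG-51", "title": "Add audit log export", "dueDate": null}
]"""

issues = json.loads(raw)

# Sort in code instead of asking the model to re-emit every issue.
# ISO-8601 date strings sort correctly as plain strings; issues with
# no due date go last.
issues.sort(key=lambda i: (i["dueDate"] is None, i["dueDate"] or ""))

for issue in issues:
    due = issue["dueDate"] or "no due date"
    print(f"{due:>12}  {issue['identifier']:<8}  {issue['title']}")

Only these formatted lines (or an even shorter summary) ever need to re-enter the model's context, so the token cost stays flat no matter how many issues the tool returns. This is exactly the kind of program the LLM itself could generate once tool outputs carry a schema: with known field names and types, the model can write the transformation once instead of re-reading the data on every turn.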