Show HN: My LLM CLI tool can run tools now, from Python code or plugins

https://news.ycombinator.com/rss Hits: 28
Summary

Large Language Models can run tools in your terminal with LLM 0.26
27th May 2025

LLM 0.26 is out with the biggest new feature since I started the project: support for tools. You can now use the LLM CLI tool (and Python library) to grant LLMs from OpenAI, Anthropic, Gemini, and local models from Ollama access to any tool that you can represent as a Python function. LLM also now has tool plugins, so you can install a plugin that adds new capabilities to whatever model you are currently using.

There’s a lot to cover here, but here are the highlights:

- LLM can run tools now! You can install tools from plugins and load them by name with --tool/-T name_of_tool.
- You can also pass in Python function code on the command line with the --functions option (see the sketch at the end of this summary).
- The Python API supports tools too: llm.get_model("gpt-4.1").chain("show me the locals", tools=[locals]).text() (expanded below).
- Tools work in both async and sync contexts (see the async sketch below).

Trying it out

First, install the latest LLM. It may not be on Homebrew yet, so I suggest using pip or pipx or uv:

    pip install llm
    pipx install llm
    uv tool install llm

If you have it already, upgrade it. Tools work with other vendors, but let’s stick with OpenAI for the moment. Give LLM an OpenAI API key:

    llm keys set openai
    # Paste key here

Now let’s run our first tool:

    llm --tool llm_version "What version?" --td

Here’s what I get:

llm_version is a very simple demo tool that ships with LLM. Running --tool llm_version exposes that tool to the model; you can specify the option multiple times to enable multiple tools, and the shorter -T alias saves on typing. The --td option stands for --tools-debug; it causes LLM to output information about tool calls and their responses so you can peek behind the scenes.

This is using the default LLM model, which is usually gpt-4o-mini. I switched it to gpt-4.1-mini (better, but fractionally more expensive) by running:

    llm models default gpt-4.1-mini

You can try other models using the -m option. Here’s how to run a similar demo of the llm_time built-in tool...
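The post demonstrates --functions with inline Python code, and the LLM docs also describe passing a path to a Python file, where every function defined becomes a tool. Here is a rough sketch under that assumption; the file name and prompt are my own illustration, not from the post:

    # functions.py -- illustrative sketch, not from the post.
    # Functions passed via --functions are exposed to the model as tools;
    # the type hints and docstring describe the tool to the model.

    def multiply(x: int, y: int) -> int:
        """Multiply two integers and return the result."""
        return x * y

    # Shell usage (assuming --functions accepts a file path):
    #   llm --functions functions.py "What is 34234 * 213345?" --td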
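Expanding the one-line Python API example from the highlights into something runnable: a minimal sync sketch. The upper function, prompt, and model name are my own illustration; chain(), tools=[...], and .text() are as shown in the post:

    import llm

    def upper(text: str) -> str:
        """Convert the given text to uppercase."""
        return text.upper()

    model = llm.get_model("gpt-4.1-mini")
    # chain() lets the model call tools as many times as it needs
    # before producing a final answer.
    response = model.chain(
        "Use the tool to convert 'hello world' to uppercase",
        tools=[upper],
    )
    print(response.text())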
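Since tools work in async contexts too, here is a companion async sketch. This assumes the async model mirrors the sync chain() interface with an awaitable text(); treat it as an approximation and check the LLM docs for the exact async API:

    import asyncio
    import llm

    def upper(text: str) -> str:
        """Convert the given text to uppercase."""
        return text.upper()

    async def main():
        model = llm.get_async_model("gpt-4.1-mini")
        # Assumption: async chains mirror the sync API, with an awaitable text().
        result = await model.chain(
            "Use the tool to convert 'hello world' to uppercase",
            tools=[upper],
        ).text()
        print(result)

    asyncio.run(main())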

First seen: 2025-05-27 21:57

Last seen: 2025-05-29 01:03