# ThinkMesh

ThinkMesh is a Python library for running diverse reasoning paths in parallel, scoring them with internal confidence signals, reallocating compute to promising branches, and fusing outcomes with verifiers and reducers. It works offline with Hugging Face Transformers and vLLM/TGI, and with hosted APIs.

> **Note:** ThinkMesh is still in its early development phase; breaking changes can occur.

## Highlights

- Parallel reasoning with DeepConf-style confidence gating and budget reallocation
- Offline-first with Transformers; optional vLLM/TGI for server-side batching
- Hosted adapters for OpenAI and Anthropic
- Async execution with dynamic micro-batches
- Reducers (majority/judge) and pluggable verifiers (regex/numeric/custom)
- Caching, metrics, and JSON traces

## Install

```bash
git clone https://github.com/martianlantern/thinkmesh.git
cd thinkmesh
pip install -e ".[dev,transformers]"
```

## Quickstart: Offline DeepConf

```python
from thinkmesh import think, ThinkConfig, ModelSpec, StrategySpec

cfg = ThinkConfig(
    model=ModelSpec(
        backend="transformers",
        model_name="Qwen2.5-7B-Instruct",
        max_tokens=256,
        temperature=0.7,
        seed=42,
        extra={"device": "cuda:0"},
    ),
    strategy=StrategySpec(
        name="deepconf",
        parallel=8,
        max_steps=2,
        deepconf={"k": 5, "tau_low": -1.25, "tau_ent": 2.2, "realloc_top_p": 0.4},
    ),
    reducer={"name": "majority"},
    budgets={"wall_clock_s": 20, "tokens": 4000},
)

ans = think("Show that the product of any three consecutive integers is divisible by 3.", cfg)
print(ans.content, ans.confidence)
```

## Quickstart: OpenAI Self-Consistency

```python
import os

os.environ["OPENAI_API_KEY"] = "sk-..."

from thinkmesh import think, ThinkConfig, ModelSpec, StrategySpec

cfg = ThinkConfig(
    model=ModelSpec(backend="openai", model_name="gpt-4o-mini", max_tokens=256, temperature=0.6),
    strategy=StrategySpec(name="self_consistency", parallel=6, max_steps=1),
    reducer={"name": "majority"},
    budgets={"wall_clock_...
```
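For intuition about what a `majority` reducer does conceptually, here is a minimal standalone sketch of confidence-weighted majority voting over parallel reasoning traces. This is an illustration of the idea only, not ThinkMesh's internal implementation; the `Trace` class and its field names are hypothetical.

```python
from collections import defaultdict
from dataclasses import dataclass


@dataclass
class Trace:
    """One reasoning path: its final answer plus a confidence score.

    Hypothetical structure for illustration; not ThinkMesh's API.
    """
    answer: str
    confidence: float  # e.g. a per-trace confidence signal mapped to [0, 1]


def majority_reduce(traces: list[Trace]) -> tuple[str, float]:
    """Pick the answer with the highest total confidence across traces."""
    weight = defaultdict(float)
    for t in traces:
        weight[t.answer] += t.confidence
    best = max(weight, key=weight.get)
    total = sum(weight.values())
    # Return the winning answer and its normalized share of total confidence.
    return best, weight[best] / total


traces = [
    Trace("42", 0.9),
    Trace("42", 0.7),
    Trace("41", 0.4),
]
answer, support = majority_reduce(traces)
print(answer, round(support, 2))  # -> 42 0.8
```

Weighting votes by confidence, rather than counting each trace equally, lets a few high-confidence branches outvote many low-confidence ones, which matches the README's theme of confidence-gated compute.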