A high-performance serving engine for web browsing AI.

## Use Cases

- **I want to add web browsing AI to my app...** BLAST serves web browsing AI with an OpenAI-compatible API, with concurrency and streaming baked in.
- **I need to automate workflows...** BLAST automatically caches and parallelizes to keep costs down and enable interactive-level latencies.
- **I just want to use this locally...** BLAST makes sure you stay under budget and don't hog your computer's memory.

## Quick Start

```bash
pip install blastai && blastai serve
```

```python
from openai import OpenAI

client = OpenAI(api_key="not-needed", base_url="http://127.0.0.1:8000")

# Stream real-time browser actions
stream = client.responses.create(
    model="not-needed",
    input="Compare fried chicken reviews for top 10 fast food restaurants",
    stream=True,
)

for event in stream:
    if event.type == "response.output_text.delta":
        print(event.delta if " " in event.delta else "<screenshot>", end="", flush=True)
```

## Features

- **OpenAI-Compatible API**: Drop-in replacement for OpenAI's API
- **High Performance**: Automatic parallelism and prefix caching
- **Streaming**: Stream browser-augmented LLM output to users
- **Concurrency**: Supports many users out of the box with efficient resource management

## Documentation

Visit the documentation to learn more.

## Contributing

Awesome! See our Contributing Guide for details.

## License

MIT. As it should be!
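The streaming loop in the Quick Start uses a simple heuristic to decide what to print: a delta containing a space is treated as model text, while a space-free delta is assumed to be an (elided) base64 screenshot payload and replaced with a placeholder. A minimal sketch of that heuristic as a standalone helper (the function name `render_delta` is illustrative, not part of the BLAST API):

```python
def render_delta(delta: str) -> str:
    """Return printable text for a streamed output_text delta.

    Mirrors the Quick Start heuristic: deltas with a space are
    ordinary text; space-free deltas (e.g. base64 image data) are
    collapsed to a "<screenshot>" placeholder.
    """
    return delta if " " in delta else "<screenshot>"


# Usage in the streaming loop:
#   print(render_delta(event.delta), end="", flush=True)
print(render_delta("Top result: Popeyes"))   # ordinary text passes through
print(render_delta("iVBORw0KGgoAAAANSUhEUg"))  # base64-looking delta is collapsed
```

Note this is only a display convenience; base64 text can in principle contain no spaces yet still be short, so a production client would key off a dedicated event type instead.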