Recursive LLM prompts

https://news.ycombinator.com/rss Hits: 7
Summary

The idea here is to implement recursion using English as the programming language and an LLM (e.g., GPT-3.5) as the runtime. Basically, we come up with an LLM prompt that causes the model to return another, slightly updated prompt. More specifically, the prompts contain state, and each recursively generated prompt updates that state to move closer to an end goal (i.e., a base case). It is like recursion in code, except that instead of a function calling itself with a different set of arguments, there is a prompt that returns itself with specific parts updated to reflect the new arguments.

For simplicity, let's start without a base case; here is an infinitely recursive Fibonacci prompt:

    You are a recursive function. Instead of being written in a programming language, you are written in English. You have variables FIB_INDEX = 2, MINUS_TWO = 0, MINUS_ONE = 1, CURR_VALUE = 1. Output this paragraph but with updated variables to compute the next step of the Fibonacci sequence.

To "run this program" we can paste it into the OpenAI Playground, click run, take the result and run that, and so on. (A demo recording, shell_demo_recursive_no_base_case_2.mp4, shows this copy-and-paste loop.) In theory, because this prompt specifies no base case, we could stay in the loop of copying, pasting, and running successive prompts forever, each prompt representing one number in the Fibonacci sequence.

In run_recursive_gpt.py this is automated with a recursive Python function that repeatedly calls the OpenAI API with each successive prompt until the result satisfies the base case. Here is the code from run_recursive_gpt.py:

    import openai

    def recursively_prompt_llm(prompt, n=1):
        # Keep recursing as long as the model's output still looks like a prompt.
        if prompt.startswith("You are a recursive function"):
            prompt = openai.Completion.create(
                model="text-davinci-003",
                prompt=prompt,
                temperature=0,
                max_tokens=2048,
            )["choices"][0]["text"].strip()
            print(f"response #{n}: {prompt}")
            recursively_prompt_llm(prompt, n + 1)

    recursively_prompt_llm(s...
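The same driver loop can be sketched iteratively against the current OpenAI Python client (openai>=1.0), which replaces the legacy Completion API used above. This is a minimal sketch, not the repository's code: the chat model name ("gpt-3.5-turbo"), the client setup, and the max_steps cap are assumptions for illustration. The stopping rule mirrors the one described in the summary: keep going while the output still begins with "You are a recursive function".

    # Sketch only: assumes openai>=1.0 and a chat model; the original script
    # uses openai.Completion.create with text-davinci-003 instead.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    SEED_PROMPT = (
        "You are a recursive function. Instead of being written in a "
        "programming language, you are written in English. You have variables "
        "FIB_INDEX = 2, MINUS_TWO = 0, MINUS_ONE = 1, CURR_VALUE = 1. Output "
        "this paragraph but with updated variables to compute the next step "
        "of the Fibonacci sequence."
    )

    def run_recursive_prompt(prompt: str, max_steps: int = 10) -> str:
        """Feed each generated prompt back to the model until the output no
        longer looks like a prompt, or until max_steps is reached (needed here
        because this seed prompt has no base case)."""
        for n in range(1, max_steps + 1):
            if not prompt.startswith("You are a recursive function"):
                break  # the model returned a final answer, not a new prompt
            response = client.chat.completions.create(
                model="gpt-3.5-turbo",  # assumed model name
                messages=[{"role": "user", "content": prompt}],
                temperature=0,
            )
            prompt = (response.choices[0].message.content or "").strip()
            print(f"response #{n}: {prompt}\n")
        return prompt

    if __name__ == "__main__":
        run_recursive_prompt(SEED_PROMPT)

Writing the driver as a plain loop rather than a recursive Python function sidesteps Python's recursion depth limit, and the max_steps cap keeps a base-case-free prompt like this one from running (and billing) forever.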

First seen: 2025-04-20 18:26

Last seen: 2025-04-21 00:29