LLM Daydreaming

Source: https://news.ycombinator.com/rss
Hits: 9
Summary

Proposal & discussion of how the lack of a default-mode-network analogue in LLMs exemplifies the missing search and novelty capabilities of contemporary AI systems.

Despite impressive capabilities, large language models have yet to produce a genuine breakthrough. The puzzle is why. A reason may be that they lack some fundamental aspects of human thought: they are frozen, unable to learn from experience, and they have no “default mode” for background processing, a source of spontaneous human insight.

To solve this, I propose a day-dreaming loop (DDL): a background process that continuously samples pairs of concepts from memory. A generator model explores non-obvious links between them, and a critic model filters the results for genuinely valuable ideas. These discoveries are fed back into the system’s memory, creating a compounding feedback loop where new ideas themselves become seeds for future combinations.

The cost of this process—a “daydreaming tax”—would be substantial, given the low hit rate for truly novel connections. This expense, however, may be the necessary price for innovation. It would also create a moat against model distillation, as valuable insights emerge from combinations no one would know to ask for.

The strategic implication is counterintuitive: to make AI cheaper and faster for end users, we might first need to build systems that spend most of their compute on this “wasteful” background search. This suggests a future where expensive, daydreaming AIs are used primarily to generate proprietary training data for the next generation of efficient models, offering a path around the looming data wall.

…I feel I am nibbling on the edges of this world when I am capable of getting what Picasso means when he says to me—perfectly straight-facedly—later of the enormous new mechanical brains or calculating machines: “But they are useless. They can only give you answers.” William Fifield, “Pablo Picasso—A Composite Interview” (1964)

Dwarkesh Patel asks why no LLM has...
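As a rough illustration of the loop's structure, here is a minimal sketch in Python. The `llm` stub, the two prompt templates, and the YES/NO critic protocol are assumptions for illustration, not the article's implementation; the post describes the DDL only at the level of generator, critic, and memory feedback. Note how the "daydreaming tax" shows up directly: every sampled pair costs two model calls, and most ideas are discarded.

```python
import random

# Hypothetical LLM call; wire in any chat-completion client here.
def llm(prompt: str) -> str:
    raise NotImplementedError("connect your model API")

# Assumed prompt templates, not from the original post.
GENERATOR_PROMPT = (
    "Concepts:\n1. {a}\n2. {b}\n\n"
    "Brainstorm a non-obvious, potentially valuable connection between them."
)
CRITIC_PROMPT = (
    "Idea:\n{idea}\n\n"
    "Is this idea novel, coherent, and useful? Answer YES or NO, then explain."
)

def daydream_step(memory: list[str]) -> str | None:
    """One DDL iteration: sample a concept pair, generate a link, filter it."""
    a, b = random.sample(memory, 2)
    idea = llm(GENERATOR_PROMPT.format(a=a, b=b))
    verdict = llm(CRITIC_PROMPT.format(idea=idea))
    # Low hit rate: most candidate connections fail the critic and are dropped.
    return idea if verdict.strip().upper().startswith("YES") else None

def daydream_loop(memory: list[str], steps: int) -> list[str]:
    """Background process; accepted ideas re-enter memory as future seeds."""
    accepted = []
    for _ in range(steps):
        idea = daydream_step(memory)
        if idea is not None:
            memory.append(idea)  # compounding: new ideas seed future pairs
            accepted.append(idea)
    return accepted
```

Appending accepted ideas back into `memory` is what makes the search compound: later iterations can pair a generated insight with an original concept, or with another insight.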

First seen: 2025-07-16 07:06

Last seen: 2025-07-16 15:10