How much do language models memorize?

Source: https://news.ycombinator.com/rss (Hits: 2)
Summary

[Submitted on 30 May 2025 (v1), last revised 2 Jun 2025 (this version, v2)]

Title: How much do language models memorize?
Authors: John X. Morris and 7 other authors

Abstract: We propose a new method for estimating how much a model "knows" about a datapoint and use it to measure the capacity of modern language models. Prior studies of language model memorization have struggled to disentangle memorization from generalization. We formally separate memorization into two components: unintended memorization, the information a model contains about a specific dataset, and generalization, the information a model contains about the true data-generation process. When we completely eliminate generalization, we can compute the total memorization, which provides an estimate of model capacity: our measurements estimate that GPT-style models have a capacity of approximately 3.6 bits per parameter. We train language models on datasets of increasing size and observe that models memorize until their capacity fills, at which point "grokking" begins and unintended memorization decreases as models begin to generalize. We train hundreds of transformer language models ranging from 500K to 1.5B parameters and produce a series of scaling laws relating model capacity and data size to membership inference.

Submission history:
[v1] Fri, 30 May 2025 17:34:03 UTC (6,686 KB)
[v2] Mon, 2 Jun 2025 14:13:41 UTC (3,282 KB)
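The decomposition has a natural compression reading: unintended memorization is the gap between a sample's code length under a reference model (the generalization term) and under the trained model, and capacity is that gap summed over a fully memorized dataset and divided by parameter count. The sketch below illustrates this reading in the eliminate-generalization setting the abstract describes, using uniform random data so the reference is simply log2(vocab) bits per token; it is not the authors' estimator, and the VOCAB constant and toy_logprob stub are assumptions made for the example.

import math

VOCAB = 256  # assumed token alphabet for the toy setting

def uniform_bits(seq):
    # Code length under the true (uniform) source: log2(VOCAB) bits per token.
    return len(seq) * math.log2(VOCAB)

def model_bits(seq, logprob_fn):
    # Code length under the model: -log2 p(seq). logprob_fn returns ln p(seq).
    return -logprob_fn(seq) / math.log(2)

def unintended_memorization(seq, logprob_fn):
    # Bits the model stores about `seq` beyond the reference (generalization)
    # baseline, clamped at zero.
    return max(0.0, uniform_bits(seq) - model_bits(seq, logprob_fn))

def capacity_bits_per_param(dataset, logprob_fn, n_params):
    # With generalization eliminated (uniform data), total memorization over
    # the training set divided by parameter count estimates capacity.
    return sum(unintended_memorization(x, logprob_fn) for x in dataset) / n_params

# Toy usage: a "model" that assigns probability 1 to one memorized sequence
# and falls back to the uniform distribution elsewhere.
memorized = bytes(range(64))
toy_logprob = lambda seq: 0.0 if seq == memorized else len(seq) * -math.log(VOCAB)
print(capacity_bits_per_param([memorized], toy_logprob, n_params=100))
# 5.12 -- the 64 tokens x 8 bits memorized, spread over 100 parameters.

For scale, at the reported 3.6 bits per parameter, the study's largest 1.5B-parameter models would saturate at roughly 5.4 Gbit (about 675 MB) of memorized data.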

First seen: 2025-06-04 00:43

Last seen: 2025-06-04 02:43