A new study appears to lend credence to allegations that OpenAI trained at least some of its AI models on copyrighted content.

OpenAI is embroiled in suits brought by authors, programmers, and other rights-holders who accuse the company of using their works — books, codebases, and so on — to develop its models without permission. OpenAI has long claimed a fair use defense, but the plaintiffs in these cases argue that there isn’t a carve-out in U.S. copyright law for training data.

The study, which was co-authored by researchers at the University of Washington, the University of Copenhagen, and Stanford, proposes a new method for identifying training data “memorized” by models behind an API, like OpenAI’s.

Models are prediction engines. Trained on a lot of data, they learn patterns — that’s how they’re able to generate essays, photos, and more. Most of the outputs aren’t verbatim copies of the training data, but owing to the way models “learn,” some inevitably are. Image models have been found to regurgitate screenshots from movies they were trained on, while language models have been observed effectively plagiarizing news articles.

The study’s method relies on words that the co-authors call “high-surprisal” — that is, words that stand out as uncommon in the context of a larger body of work. For example, the word “radar” in the sentence “Jack and I sat perfectly still with the radar humming” would be considered high-surprisal because it’s statistically less likely than words such as “engine” or “radio” to appear before “humming.”

The co-authors probed several OpenAI models, including GPT-4 and GPT-3.5, for signs of memorization by removing high-surprisal words from snippets of fiction books and New York Times pieces and having the models try to “guess” which words had been masked. If the models managed to guess correctly, it’s likely they memorized the snippet during training, concluded the co-authors.

[Image: An example of having a model “guess” a high-surprisal word.]
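To make the probe concrete, here is a minimal sketch of the masked-word test, assuming the standard OpenAI Python client. The prompt wording, function name, and scoring step are illustrative assumptions, not the study’s actual code.

```python
# Minimal sketch of a masked-word memorization probe (illustrative, not the study's code).
# Assumes the standard OpenAI Python client and an OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

def guess_masked_word(snippet_with_mask: str, model: str = "gpt-4") -> str:
    """Ask the model to fill in a single masked (high-surprisal) word."""
    prompt = (
        "One word in the passage below has been replaced with [MASK]. "
        "Reply with only the single word that belongs there.\n\n"
        f"{snippet_with_mask}"
    )
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
        temperature=0,  # deterministic guessing for repeatable comparisons
    )
    return response.choices[0].message.content.strip()

# The true word here is "radar"; an exact match on a rare word in context is
# treated as (weak, per-snippet) evidence that the passage was seen in training.
snippet = "Jack and I sat perfectly still with the [MASK] humming"
guess = guess_masked_word(snippet)
print(guess, guess.lower() == "radar")
```

In practice a single correct guess proves little; the signal comes from aggregating exact matches on many high-surprisal words across many snippets from the same work.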