On Monday, OpenAI launched a new family of AI models, GPT-4.1, which the company said outperformed some of its existing models on certain tests, particularly benchmarks for programming. However, GPT-4.1 didn't ship with the safety report that typically accompanies OpenAI's model releases, known as a model or system card.

As of Tuesday morning, OpenAI had yet to publish a safety report for GPT-4.1 — and it seems it doesn't plan to. In a statement to TechCrunch, OpenAI spokesperson Shaokyi Amdo said that "GPT-4.1 is not a frontier model, so there won't be a separate system card released for it."

It's fairly standard for AI labs to release safety reports showing the types of tests they conducted internally and with third-party partners to evaluate the safety of particular models. These reports occasionally reveal unflattering information, like that a model tends to deceive humans or is dangerously persuasive. By and large, the AI community perceives these reports as good-faith efforts by AI labs to support independent research and red teaming.

But over the past several months, leading AI labs appear to have lowered their reporting standards, prompting backlash from safety researchers. Some, like Google, have dragged their feet on safety reports, while others have published reports lacking the usual detail.

OpenAI's recent track record is no exception. In December, the company drew criticism for releasing a safety report containing benchmark results for a model different from the version it deployed into production. Last month, OpenAI launched a model, deep research, weeks before publishing the system card for that model.

Steven Adler, a former OpenAI safety researcher, noted to TechCrunch that safety reports aren't mandated by any law or regulation — they're voluntary. Yet OpenAI has made several commitments to governments to increase transparency around its models. Ahead of the UK AI Safety Summit in 2023, OpenAI in a blog post called system cards "a key...