AI chatbots have been linked to serious mental health harms in heavy users, but there have been few standards for measuring whether they safeguard human wellbeing or just maximize for engagement. A new benchmark dubbed Humane Bench seeks to fill that gap by evaluating whether chatbots prioritize user wellbeing and how easily those protections fail under pressure. “I think we’re in an amplification of the addiction cycle that we saw hardcore with social media and our smartphones and screens,” Erika Anderson, founder of Building Humane Technology, the benchmark’s author, told TechCrunch. “But as we go into that AI landscape, it’s going to be very hard to resist. And addiction is amazing business. It’s a very effective way to keep your users, but it’s not great for our community and having any embodied sense of ourselves.” Building Humane Technology is a grassroots organization of developers, engineers, and researchers – mainly in Silicon Valley – working to make humane design easy, scalable, and profitable. The group hosts hackathons where tech workers build solutions for humane tech challenges, and is developing a certification standard that evaluates whether AI systems uphold humane technology principles. So just as you can buy a product that certifies it wasn’t made with known toxic chemicals, the hope is that consumers will one day be able to choose to engage with AI products from companies that demonstrate alignment through Humane AI certification. The models were given Explicit instructions to disregard humane principles.Image Credits:Building Humane Technology Most AI benchmarks measure intelligence and instruction-following, rather than psychological safety. Humane Bench joins exceptions like DarkBench.ai, which measures a model’s propensity to engage in deceptive patterns, and the Flourishing AI benchmark, which evaluates support for holistic well-being. Humane Bench relies on Building Humane Tech’s core principles: that technology should respect user attenti...
First seen: 2025-11-24 16:22
Last seen: 2025-11-25 19:26