Microsoft AI chief says it’s ‘dangerous’ to study AI consciousness

Summary

AI models can respond to text, audio, and video in ways that sometimes fool people into thinking a human is behind the keyboard, but that doesn’t exactly make them conscious. It’s not like ChatGPT experiences sadness doing my tax return … right?

Well, a growing number of AI researchers at labs like Anthropic are asking when, if ever, AI models might develop subjective experiences similar to living beings, and if they do, what rights they should have. The debate over whether AI models could one day be conscious, and deserve rights, is dividing Silicon Valley’s tech leaders. In Silicon Valley, this nascent field has become known as “AI welfare,” and if you think it’s a little out there, you’re not alone.

Microsoft’s CEO of AI, Mustafa Suleyman, published a blog post on Tuesday arguing that the study of AI welfare is “both premature, and frankly dangerous.” Suleyman says that by lending credence to the idea that AI models could one day be conscious, these researchers are exacerbating human problems that we’re just starting to see around AI-induced psychotic breaks and unhealthy attachments to AI chatbots. Furthermore, Microsoft’s AI chief argues that the AI welfare conversation creates a new axis of division within society over AI rights in a “world already roiling with polarized arguments over identity and rights.”

Suleyman’s views may sound reasonable, but he’s at odds with many in the industry. On the other end of the spectrum is Anthropic, which has been hiring researchers to study AI welfare and recently launched a dedicated research program around the concept. Last week, Anthropic’s AI welfare program gave some of the company’s models a new feature: Claude can now end conversations with humans who are being “persistently harmful or abusive.”

Beyond Anthropic, researchers from OpenAI have independently embraced the idea of studying AI welfare. Google DeepMind recently posted a job listing for a researcher to ...
