Also appearing optimistic about companion bots in particular is Johansen. In a LinkedIn post earlier this year, she recommended that companies like OpenAI apply "insights from the impact of social media on youth mental health to emerging technologies like AI companions," concluding that "AI has great potential to enhance mental health support, and it raises new challenges around privacy, trust, and quality."

Other experts on the council have been critical of companion bots. OpenAI noted that Mohr specifically "studies how technology can help prevent and treat depression." Historically, Mohr has advocated for more digital tools to support mental health, suggesting in 2017 that apps could help support people who can't get to the therapist's office. More recently, though, Mohr told The Wall Street Journal in 2024 that he had concerns about AI chatbots posing as therapists. "I don't think we're near the point yet where there's just going to be an AI who acts like a therapist," Mohr said. "There's still too many ways it can go off the rails."

Similarly, although Dennis-Tiwary told Wired last month that she finds the term "AI psychosis" to be "very unhelpful" in most cases that aren't "clinical," she has warned that "above all, AI must support the bedrock of human well-being, social connection." "While acknowledging that there are potentially fruitful applications of social AI for neurodivergent individuals, the use of this highly unreliable and inaccurate technology among children and other vulnerable populations is of immense ethical concern," Dennis-Tiwary wrote last year.

For OpenAI, the wellness council could help the company turn a corner as ChatGPT and Sora continue to face heavy scrutiny. The company also confirmed that it would continue consulting "the Global Physician Network, policymakers, and more, as we build advanced AI systems in ways that support people's well-being."