This week, OpenAI launched a new image generator in ChatGPT, which quickly went viral for its ability to create Studio Ghibli-style images. Beyond the pastel illustrations, GPT-4o’s native image generator significantly upgrades ChatGPT’s capabilities, improving picture editing, text rendering, and spatial representation.

However, one of the most notable changes OpenAI made this week involves its content moderation policies, which now allow ChatGPT to generate, upon request, images depicting public figures, hateful symbols, and racial features. OpenAI previously rejected these types of prompts for being too controversial or harmful. But now the company has “evolved” its approach, according to a blog post published Thursday by OpenAI’s model behavior lead, Joanne Jang.

“We’re shifting from blanket refusals in sensitive areas to a more precise approach focused on preventing real-world harm,” Jang said. “The goal is to embrace humility: recognizing how much we don’t know, and positioning ourselves to adapt as we learn.”

These adjustments appear to be part of OpenAI’s larger plan to effectively “uncensor” ChatGPT. OpenAI announced in February that it is starting to change how it trains AI models, with the ultimate goal of letting ChatGPT handle more requests, offer diverse perspectives, and reduce the number of topics the chatbot refuses to engage with.

Under the updated policy, ChatGPT can now generate and modify images of Donald Trump, Elon Musk, and other public figures, something OpenAI previously did not allow. Jang says OpenAI doesn’t want to be the arbiter of status, choosing who should and shouldn’t be allowed to be depicted by ChatGPT. Instead, the company is giving users an opt-out option if they don’t want ChatGPT depicting them.

In a white paper released Tuesday, OpenAI also said it will allow ChatGPT users to “generate hateful symbols,” such as swastikas, in educational or neutral contexts, as long as they don’t “clearly praise or endorse extremist agendas.” Moreover, OpenAI is chang...