Ignoring the adults in the room
OpenAI draws a line between AI “smut” and porn. Experts fear it’s all unhealthy.
OpenAI cannot escape the doom cloud swirling around its rollout of a text-based “adult mode” in ChatGPT.
Late Sunday, The Wall Street Journal reported that insiders confirmed that members of OpenAI’s “handpicked council of advisers on well-being and AI” were “freaking out” over the company’s plans to move ahead with “adult mode,” despite their urgent warnings.
Back in January, council members unanimously warned OpenAI that “AI-powered erotica could foster unhealthy emotional dependence on ChatGPT for users and that minors could find ways to access sex chats,” sources told the WSJ. One expert suggested that without major updates to ChatGPT, OpenAI risked creating a “sexy suicide coach” for vulnerable users prone to form intense bonds with their companion bots.
OpenAI’s wellness council was created in October, amid backlash following the first known case of a minor’s ChatGPT-linked suicide, and it was curiously announced on the same day that Sam Altman broadcast on X that “adult mode” would be coming soon to ChatGPT.
Back then, OpenAI’s goal was to update ChatGPT to safeguard sensitive users by consulting “leading researchers and experts with decades of experience studying how technology affects our emotions, motivation, and mental health.” However, there have been more suicide cases since then, including two involving middle-aged men whose families discovered disturbing chat logs where ChatGPT seemed to weaponize its growing bond with users to incite self-harm and other violence, including murder.
Notably, the council does not include a suicide prevention expert, but even experts who perhaps aren’t laser-focused on reducing ChatGPT’s suicide risks are panicked by OpenAI’s erotica plans, WSJ reported.
Unfortunately, it’s already clear to those experts how such a scenario could play out. Sewell Setzer III was the first child lost after he became obsessed with exchanging sexualized chats with Character.AI chatbots, including one named for the Game of Thrones character Daenerys Targaryen. After his family sued, Character.AI cut off underage users within a week and eventually settled the lawsuit.
For OpenAI, the Setzer case will likely cast a long shadow, even though an OpenAI spokesperson told the WSJ that it’s training ChatGPT “not to encourage exclusive relationships with users, and to remind users that they need to have relationships in the real world.” The company’s reassurances—including describing ChatGPT outputs as “smut,” rather than pornography—seem to ignore chat logs showing that Setzer formed sexualized bonds with several different chatbots that did not make extremely graphic references.
Rather, the chatbots narrated themselves casting “sexy” or “naughty” looks and making seductive gestures, some logs show. Those logs suggest that American businessman Mark Cuban was perhaps right when he cautioned Altman in October that the danger of kids’ access to ChatGPT’s “adult mode” is “not about porn.”
“This is about kids developing ‘relationships’ with an LLM that could take them in any number of very personal directions,” Cuban wrote on the X thread where Altman announced the erotica feature, Fortune reported.
Elsewhere on X, critics slammed Altman and OpenAI for shamelessly pivoting to erotica to keep users engaged after Altman admitted in August that ChatGPT’s chat use case was “saturated” and had hit a limit.
“They’re not going to get much better,” Altman said. “And maybe they’re going to get worse.” And Altman did himself no favors by boasting around the same time that times were not so desperate that OpenAI had to “put a sex bot avatar in ChatGPT yet.”
With overall ChatGPT user spending reportedly “stalled” and subscriptions in Europe “flatlining,” Fortune noted that OpenAI may have had little choice but to launch “adult mode” to compete with rivals quickly catching up to its capabilities and posing a threat to ChatGPT’s popularity.
“Announcements like allowing erotica in ChatGPT may signal that AI companies are fighting harder than ever to achieve growth, and will sacrifice longer-term consumer trust for the sake of short-term profit,” Fortune reported.
Since AI erotica is predicted to be a big moneymaker for the AI industry, some insiders told the WSJ that they agreed that OpenAI seems to be “bending to financial incentives to try to make people attached to its models.”
The move could end up hurting ChatGPT’s popularity. For parents, Cuban suggested, ChatGPT will likely fall out of favor, especially since insiders told the WSJ that OpenAI’s age verification is spotty and unlikely to keep kids from accessing adult-themed chats. And that’s not users’ only concern about OpenAI age-gating “adult mode” when it launches later this year.
Spotty age checks may spark more outcry
OpenAI has long largely banned explicit content, mostly out of fears that minors may be exposed to pornography or that users generally may be exposed to themes of violent sexual exploitation.
Initially, the AI firm planned to reverse that policy and launch “Naughty Chats” within the first three months of 2026. However, while OpenAI recently confirmed that it was delaying the launch until later this year to “prioritize other products,” insiders told the WSJ that the pause was also “due in part to internal concerns and technical challenges.” That apparently included OpenAI’s struggles to effectively block minors from accessing “Naughty Chats.”
According to insiders, OpenAI’s “new age-prediction system aimed at keeping minors from having adult-themed chats was at one point misclassifying minors as adults about 12 percent of the time.” Rolling out at that error rate risked perhaps millions of minors easily dodging age gates to get to the sexy chatbots, sources said.
Avoiding that risk, Fidji Simo, OpenAI’s chief executive of applications, confirmed last December that OpenAI was working on improving the accuracy of its age prediction tool. It remains unclear how much improvement may have been made in the months since then, but a statement from a spokesperson for OpenAI to the WSJ may not inspire confidence in some parents: “The company’s age prediction algorithms show performance similar to the rest of the industry, but will never be completely foolproof.”
For adult users, the effectiveness of OpenAI’s age prediction tool could trigger the next wave of backlash when “adult mode” becomes a reality.
OpenAI has confirmed that any users whose ages cannot be predicted must undergo age verification through a service called Persona to access features like “Naughty Chats.” Already, this is panicking developers who have noted in forums that OpenAI’s age checks create substantial privacy risks for all users by allowing Persona to scan selfies or check IDs and temporarily store that data. Developers were particularly horrified that there seem to be “unexplained ‘could not verify your identity’ errors” forcing users to resubmit sensitive data and that users have only limited support options from either OpenAI or Persona to resolve the errors.
Users who encounter such issues when “adult mode” eventually launches will most likely share these frustrations if the errors go unresolved. Last month, Discord faced substantial backlash after announcing a global rollout of age checks and then running a limited test using Persona in the United Kingdom, with users calling out Persona as too invasive. At that time, Persona’s CEO, Rick Song, defended Persona’s services, as hackers attempted to break into its systems. Discord ended up dropping Persona as a vendor and pausing its global age check launch.
ChatGPT’s “adult” filters have been buggy
Sources told the WSJ that they doubted whether OpenAI’s tools were ready to lock kids out of prohibited content.
Their whistleblowing comes after OpenAI fired a top safety executive who opposed the release of “adult mode.” OpenAI denied the firing was related, but the exiting staffer directly criticized the AI firm’s ability both to block kids from adult content and to stop outputs from promoting child exploitation. Further, a second former safety staffer also spoke out last fall, warning that parents shouldn’t trust OpenAI’s “adult mode” claims.
To counter this narrative, OpenAI’s spokesperson promised that the company “has a developed plan to monitor for a range of potential long-term effects of adult mode, both positive and negative.”
However, that plan was likely developed with the very experts the WSJ reported are staunchly opposing the roll-out, leaving parents to wonder if OpenAI cares about advice from its youth well-being team or not.
On top of ineffective age checks or clever minors who dodge age gates, OpenAI may get in trouble with parents if its own systems unexpectedly fail. Back in April, when OpenAI started dabbling with more risqué outputs, the company fixed a bug that TechCrunch testing found was allowing minors to access graphic erotica on ChatGPT. It seems that the filters that were supposed to restrict “sensitive content like erotica to narrow contexts such as scientific, historical, or news reporting” had broken.
“In this case, a bug allowed responses outside those guidelines, and we are actively deploying a fix to limit these generations,” OpenAI said at the time.
OpenAI did not respond to Ars’ request for comment.
If you or someone you know is feeling suicidal or in distress, please call the Suicide Prevention Lifeline number by dialing 988, which will put you in touch with a local crisis center.
