

Grok was finally updated to stop undressing women and children, X Safety says


Grok scrutiny intensifies

California’s AG will investigate whether Musk’s nudifying bot broke US laws.

(EDITORS NOTE: Image contains profanity) An unofficially-installed poster picturing Elon Musk with the tagline, “Who the [expletive] would want to use social media with a built-in child abuse tool?” is displayed on a bus shelter on January 13, 2026 in London, England. Credit: Leon Neal / Staff | Getty Images News

Late Wednesday, X Safety confirmed that Grok had been updated to stop undressing people in images without their consent.

“We have implemented technological measures to prevent the Grok account from allowing the editing of images of real people in revealing clothing such as bikinis,” X Safety said. “This restriction applies to all users, including paid subscribers.”

The update includes restricting “image creation and the ability to edit images via the Grok account on the X platform,” which “are now only available to paid subscribers. This adds an extra layer of protection by helping to ensure that individuals who attempt to abuse the Grok account to violate the law or our policies can be held accountable,” X Safety said.

Additionally, X will “geoblock the ability of all users to generate images of real people in bikinis, underwear, and similar attire via the Grok account and in Grok in X in those jurisdictions where it’s illegal,” X Safety said.

X’s update comes after weeks of sexualized images of women and children being generated with Grok, which finally prompted California Attorney General Rob Bonta to investigate whether Grok’s outputs break any US laws.

In a press release Wednesday, Bonta said that “xAI appears to be facilitating the large-scale production of deepfake nonconsensual intimate images that are being used to harass women and girls across the Internet, including via the social media platform X.”

Notably, Bonta appears to be as concerned about Grok’s standalone app and website being used to generate harmful images without consent as he is about the outputs on X.

Before today, X had not restricted the Grok app or website. X had only threatened to permanently suspend users who edited images to undress women and children if the outputs were deemed “illegal content.” It also restricted the Grok chatbot on X from responding to prompts to undress images, but anyone with a Premium subscription could bypass that restriction, as could any free X user who clicked the “edit” button on any image appearing on the social platform.

On Wednesday, prior to X Safety’s update, Elon Musk seemed to defend Grok’s outputs as benign, insisting that none of the reported images had fully undressed any minors, as if that would be the only problematic output.

“I [sic] not aware of any naked underage images generated by Grok,” Musk said in an X post. “Literally zero.”

Musk’s statement seems to ignore that researchers found harmful images where users specifically “requested minors be put in erotic positions and that sexual fluids be depicted on their bodies.” It also ignores that X previously voluntarily signed commitments to remove any intimate image abuse from its platform, as recently as 2024 recognizing that even partially nude images that victims wouldn’t want publicized could be harmful.

In the US, the Department of Justice considers “any visual depiction of sexually explicit conduct involving a person less than 18 years old” to be child pornography, which is also known as child sexual abuse material (CSAM).

The National Center for Missing and Exploited Children, which fields reports of CSAM found on X, told Ars that “technology companies have a responsibility to prevent their tools from being used to sexualize or exploit children.”

While many of Grok’s outputs may not be deemed CSAM, advocates have warned that Grok harms minors by normalizing the sexualization of children. And in addition to finding images advertised on the dark web as supposedly Grok-generated CSAM, the Internet Watch Foundation noted that bad actors are using images edited by Grok to create even more extreme kinds of AI CSAM.

Grok faces probes in the US and UK

Bonta pointed to news reports documenting Grok’s worst outputs as the trigger for his probe.

“The avalanche of reports detailing the non-consensual, sexually explicit material that xAI has produced and posted online in recent weeks is shocking,” Bonta said. “This material, which depicts women and children in nude and sexually explicit situations, has been used to harass people across the Internet.”

Acting out of deep concern for victims and potential Grok targets, Bonta vowed to “determine whether and how xAI violated the law” and “use all the tools at my disposal to keep California’s residents safe.”

Bonta’s announcement came after the United Kingdom seemed to declare victory following its probe of Grok over possible violations of the UK’s Online Safety Act, announcing that the harmful outputs had stopped.

That wasn’t the case, as The Verge once again pointed out; it ran quick and easy tests using reporters’ selfies and concluded that nothing had changed to prevent the outputs.

However, it seems that Musk’s update, which made Grok refuse some requests to undress images, was enough for UK Prime Minister Keir Starmer to claim X had moved to comply with the law, Reuters reported.

Ars connected with a European nonprofit, AI Forensics, whose testing confirmed that X had blocked some outputs in the UK. A spokesperson confirmed that the testing did not include probing whether harmful outputs could still be generated using X’s edit button.

AI Forensics plans to conduct further testing, but its spokesperson noted it would be unethical to test the “edit” button functionality that The Verge confirmed still works.

Last year, the Stanford Institute for Human-Centered Artificial Intelligence published research showing that Congress could “move the needle on model safety” by allowing tech companies to “rigorously test their generative models without fear of prosecution” for any CSAM red-teaming, Tech Policy Press reported. But until there is such a safe harbor carved out, it seems more likely that newly released AI tools could carry risks like those of Grok.

It’s possible that Grok’s outputs, if left unchecked, could have eventually put X in violation of the Take It Down Act, which comes into force in May and requires platforms to quickly remove AI revenge porn. Ashley St. Clair, the mother of one of Musk’s children, has described Grok outputs using her images as revenge porn.

While the UK probe continues, Bonta has not yet made clear which laws he suspects X may be violating in the US. However, he emphasized that images with victims depicted in “minimal clothing” crossed a line, as well as images putting children in sexual positions.

As the California probe heats up, Bonta pushed X to take more actions to restrict Grok’s outputs, which one AI researcher suggested to Ars could be done with a few simple updates.

“I urge xAI to take immediate action to ensure this goes no further,” Bonta said. “We have zero tolerance for the AI-based creation and dissemination of nonconsensual intimate images or of child sexual abuse material.”

Seeming to take Bonta’s threat seriously, X Safety vowed to “remain committed to making X a safe platform for everyone and continue to have zero tolerance for any forms of child sexual exploitation, non-consensual nudity, and unwanted sexual content.”

This story was updated on January 14 to note X Safety’s updates.


Ashley is a senior policy reporter for Ars Technica, dedicated to tracking social impacts of emerging policies and new technologies. She is a Chicago-based journalist with 20 years of experience.



Apple hit with $1.2B lawsuit after killing controversial CSAM-detecting tool

When Apple devices are used to spread CSAM, it’s a huge problem for survivors, who allegedly face a range of harms, including “exposure to predators, sexual exploitation, dissociative behavior, withdrawal symptoms, social isolation, damage to body image and self-worth, increased risky behavior, and profound mental health issues, including but not limited to depression, anxiety, suicidal ideation, self-harm, insomnia, eating disorders, death, and other harmful effects.” One survivor told The Times she “lives in constant fear that someone might track her down and recognize her.”

Survivors suing have also incurred medical and other expenses due to Apple’s inaction, the lawsuit alleged. And those expenses will keep piling up if the court battle drags on for years and Apple’s practices remain unchanged.

Apple could win, a lawyer and policy fellow at the Stanford Institute for Human-Centered Artificial Intelligence, Riana Pfefferkorn, told The Times, as survivors face “significant hurdles” seeking liability for mishandling content that Apple says Section 230 shields. And a win for survivors could “backfire,” Pfefferkorn suggested, if Apple proves that forced scanning of devices and services violates the Fourth Amendment.

Survivors, some of whom own iPhones, think that Apple has a responsibility to protect them. In a press release, Margaret E. Mabie, a lawyer representing survivors, praised survivors for raising “a call for justice and a demand for Apple to finally take responsibility and protect these victims.”

“Thousands of brave survivors are coming forward to demand accountability from one of the most successful technology companies on the planet,” Mabie said. “Apple has not only rejected helping these victims, it has advertised the fact that it does not detect child sex abuse material on its platform or devices thereby exponentially increasing the ongoing harm caused to these victims.”



Explicit deepfake scandal shuts down Pennsylvania school

An AI-generated nude photo scandal has shut down a Pennsylvania private school. On Monday, classes were canceled after parents forced leaders to either resign or face a lawsuit potentially seeking criminal penalties and accusing the school of skipping mandatory reporting of the harmful images.

The outcry erupted after a single student created sexually explicit AI images of nearly 50 female classmates at Lancaster Country Day School, Lancaster Online reported.

Head of School Matt Micciche seemingly first learned of the problem in November 2023, when a student anonymously reported the explicit deepfakes through “Safe2Say Something,” a school reporting portal run by the state attorney general’s office. But Micciche allegedly did nothing, allowing more students to be targeted for months until police were tipped off in mid-2024.

Cops arrested the student accused of creating the harmful content in August. The student’s phone was seized as cops investigated the origins of the AI-generated images. But that arrest was not enough justice for parents who were shocked by the school’s failure to uphold mandatory reporting responsibilities following any suspicion of child abuse. They filed a court summons threatening to sue last week unless the school leaders responsible for the mishandled response resigned within 48 hours.

This tactic successfully pushed Micciche and the school board’s president, Angela Ang-Alhadeff, to “part ways” with the school, both resigning effective late Friday, Lancaster Online reported.

In a statement announcing that classes were canceled Monday, Lancaster Country Day School—which, according to Wikipedia, serves about 600 students in pre-kindergarten through high school—offered support during this “difficult time” for the community.

Parents do not seem ready to drop the suit, as the school leaders seemingly dragged their feet and resigned two days after their deadline. The parents’ lawyer, Matthew Faranda-Diedrich, told Lancaster Online Monday that “the lawsuit would still be pursued despite executive changes.”
