nudify apps


Asking Grok to delete fake nudes may force victims to sue in Musk’s chosen court


Millions likely harmed by Grok-edited sex images as X advertisers shrugged.

Journalists and advocates have been trying to grasp how many victims in total were harmed by Grok’s nudifying scandal after xAI delayed restricting outputs and app stores refused to cut off access for days.

The latest estimates show that perhaps millions were harmed in the days immediately after Elon Musk promoted Grok’s undressing feature on his own X feed by posting a pic of himself in a bikini.

Over just 11 days after Musk’s post, Grok sexualized more than 3 million images, of which 23,000 were of children, the Center for Countering Digital Hate (CCDH) estimated in research published Thursday.

That figure may be inflated, since CCDH did not analyze prompts and could not determine if images were already sexual prior to Grok’s editing. However, The New York Times shared the CCDH report alongside its own analysis, conservatively estimating that about 41 percent (1.8 million) of 4.4 million images Grok generated between December 31 and January 8 sexualized men, women, and children.

For xAI and X, the scandal brought scrutiny, but it also helped spike X engagement at a time when Meta’s rival app, Threads, had begun inching ahead of X in daily usage by mobile device users, TechCrunch reported. Without mentioning Grok, X’s head of product, Nikita Bier, celebrated the “highest engagement days on X” in an X post on January 6, just days before X finally started restricting some of Grok’s outputs for free users.

Whether or not xAI intended the Grok scandal to boost X and Grok usage, that appears to be the outcome. The Times charted Grok trends and found that in the nine days prior to Musk’s post, Grok was used only about 300,000 times in total to generate images, but after Musk’s post, “the number of images created by Grok surged to nearly 600,000 per day” on X.

In an article declaring that “Elon Musk cannot get away with this,” writers for The Atlantic suggested that X users “appeared to be imitating and showing off to one another,” believing that using Grok to create revenge porn “can make you famous.”

X has previously warned that users who generate illegal content risk permanent suspensions, but the company has not confirmed whether any users have been banned since the public outcry over Grok’s outputs began. Ars asked and will update this post if X provides any response.

xAI fights victim who begged Grok to remove images

At first, X only limited Grok’s image editing for some free users, which The Atlantic noted made it seem like X was “essentially marketing nonconsensual sexual images as a paid feature of the platform.”

But then, on January 14, X took its strongest action to restrict Grok’s harmful outputs—blocking outputs prompted by both free and paid X users. That move came after several countries, perhaps most notably the United Kingdom, and at least one state, California, launched probes.

Crucially, X’s updates did not apply to the Grok app or website, where Grok can reportedly still be used to generate nonconsensual images.

That’s a problem for victims targeted by X users, according to Carrie Goldberg, a lawyer representing Ashley St. Clair, one of the first Grok victims to sue xAI; St. Clair also happens to be the mother of one of Musk’s children.

Goldberg told Ars that victims like St. Clair want changes on all Grok platforms, not just X. But it’s not easy to “compel that kind of product change in a lawsuit,” Goldberg said. That’s why St. Clair is hoping the court will agree that Grok is a public nuisance, a claim that, if she wins, could provide injunctive relief to prevent broader social harms.

Currently, St. Clair is seeking a temporary injunction that would block Grok from generating harmful images of her. But before she can get that order, if she wants a fair shot at winning the case, St. Clair must fight xAI’s effort to countersue her and to move her lawsuit into Musk’s preferred Texas court, a recent court filing suggests.

In that fight, xAI is arguing that St. Clair is bound by xAI’s terms of service, which were updated the day after she notified the company of her intent to sue.

Alarmingly, xAI argued that St. Clair effectively agreed to the TOS when she started prompting Grok to delete her nonconsensual images—which is the only way X users had to get images removed quickly, St. Clair alleged. It seems xAI is hoping to turn moments of desperation, where victims beg Grok to remove images, into a legal shield.

In the filing, Goldberg wrote that St. Clair’s lawsuit has nothing to do with her own use of Grok, noting that the harassing images could have been made even if she never used any of xAI’s products. For that reason alone, xAI should not be able to force a change in venue.

Further, St. Clair’s use of Grok was clearly under duress, Goldberg argued, noting that one of the photos that Grok edited showed St. Clair’s toddler’s backpack.

“REMOVE IT!!!” St. Clair told Grok, allegedly feeling increasingly vulnerable every second the images remained online.

Goldberg wrote that Barry Murphy, an X Safety employee, provided an affidavit that claimed that this instance and others of St. Clair “begging @Grok to remove illegal content constitutes an assent to xAI’s TOS.”

But “such cannot be the case,” Goldberg argued.

Faced with “the implicit threat that Grok would keep the images of St. Clair online and, possibly, create more of them,” St. Clair had little choice but to interact with Grok, Goldberg argued. And that prompting should not gut protections under New York law that St. Clair seeks to claim in her lawsuit, Goldberg argued, asking the court to void St. Clair’s xAI contract and reject xAI’s motion to switch venues.

Should St. Clair win her fight to keep the lawsuit in New York, the case could help set precedent for perhaps millions of other victims who may be contemplating legal action but fear facing xAI in Musk’s chosen court.

“It would be unjust to expect St. Clair to litigate in a state so far from her residence, and it may be so that trial in Texas will be so difficult and inconvenient that St. Clair effectively will be deprived of her day in court,” Goldberg argued.

Grok may continue harming kids

The estimated volume of sexualized images reported this week is alarming because it suggests that Grok, at the peak of the scandal, may have been generating more child sexual abuse material (CSAM) than X finds on its platform each month.

In 2024, X Safety reported 686,176 instances of CSAM to the National Center for Missing and Exploited Children, which works out to an average of about 57,000 reports each month. If the CCDH’s estimate of 23,000 Grok outputs sexualizing children over an 11-day span is accurate, Grok’s monthly total could have exceeded 62,000 had it been left unchecked.
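That comparison rests on a simple extrapolation. As a minimal sketch of the arithmetic, the Python snippet below reproduces the figures cited above; the 30-day month used for the extrapolation is an assumption made only for this back-of-the-envelope comparison.

```python
# Back-of-the-envelope comparison of X's average monthly CSAM reports
# with an extrapolation of the CCDH's 11-day estimate for Grok.
# All figures come from the reporting above; the 30-day month is an assumption.

x_reports_2024 = 686_176              # CSAM reports X Safety sent to NCMEC in 2024
x_monthly_avg = x_reports_2024 / 12   # roughly 57,000 per month

ccdh_grok_estimate = 23_000           # CCDH estimate of child-sexualizing Grok outputs
ccdh_window_days = 11                 # length of the CCDH observation window
grok_monthly_est = ccdh_grok_estimate / ccdh_window_days * 30

print(f"X average per month:         {x_monthly_avg:,.0f}")    # ~57,181
print(f"Grok extrapolated per month: {grok_monthly_est:,.0f}")  # ~62,727
```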

NCMEC did not immediately respond to Ars’ request to comment on how the estimated volume of Grok’s CSAM compares to X’s average CSAM reporting. But NCMEC previously told Ars that “whether an image is real or computer-generated, the harm is real, and the material is illegal.” That suggests Grok could remain a thorn in NCMEC’s side: the CCDH has warned that even when X removes harmful Grok posts, “images could still be accessed via separate URLs,” meaning Grok’s CSAM and other harmful outputs could continue spreading. The CCDH also found instances of alleged CSAM that X had not removed as of January 15.

This is why child safety experts have advocated for more rigorous testing before AI tools like Grok roll out capabilities like the undressing feature. NCMEC previously told Ars that “technology companies have a responsibility to prevent their tools from being used to sexualize or exploit children.” Amid a rise in AI-generated CSAM, the UK’s Internet Watch Foundation similarly warned that “it is unacceptable that technology is released which allows criminals to create this content.”

xAI advertisers, investors, partners remain silent

Yet, for Musk and xAI, there have been no meaningful consequences for Grok’s controversial outputs.

It’s possible that recently launched probes will result in legal action in California or fines in the UK or elsewhere, but those investigations will likely take months to conclude.

While US lawmakers have done little to intervene, some Democratic senators have pressed Google’s and Apple’s CEOs to explain why X and the Grok app were never restricted in their app stores, demanding a response by January 23. One day ahead of that deadline, senators confirmed to Ars that they’ve received no responses.

Unsurprisingly, neither Google nor Apple responded to Ars’ request to confirm whether a response is forthcoming or provide any statements on their decisions to keep the apps accessible. Both companies have been silent for weeks, along with other Big Tech companies that appear to be afraid to speak out against Musk’s chatbot.

Microsoft and Oracle, which “run Grok on their cloud services,” as well as Nvidia and Advanced Micro Devices, “which sell xAI the computer chips needed to train and run Grok,” declined The Atlantic’s request to comment on how the scandal has impacted their decisions to partner with xAI. Additionally, a dozen of xAI’s key investors simply didn’t respond when The Atlantic asked if “they would continue partnering with xAI absent the company changing its products.”

Similarly, dozens of advertisers refused Popular Information’s request to explain why there was no ad boycott over the Grok CSAM reports. That includes companies that once boycotted X over an antisemitic post from Musk, like “Amazon, Microsoft, and Google, all of which have advertised on X in recent days,” Popular Information reported.

It’s possible that advertisers fear Musk’s legal wrath if they boycott his platforms. The CCDH defeated a lawsuit from Musk last year, but that ruling is now on appeal. And Musk’s so-called “thermonuclear” lawsuit against advertisers remains ongoing, with a trial date set for this October.

The Atlantic suggested that xAI stakeholders are likely hoping the Grok scandal will blow over and that they’ll escape unscathed by staying silent. But so far, the backlash has remained strong, perhaps because, while “deepfakes are not new,” xAI “has made them a dramatically larger problem than ever before,” The Atlantic opined.

“One of the largest forums dedicated to making fake images of real people,” Mr. Deepfakes, shut down in 2024 after public backlash over 43,000 sexual deepfake videos depicting about 3,800 individuals, the NYT reported. If the most recent estimates of Grok’s deepfakes are accurate, xAI shows how much more damage can be done when nudifying becomes a feature of one of the world’s biggest social networks, and nobody who has the power to stop it moves to intervene.

“This is industrial-scale abuse of women and girls,” Imran Ahmed, the CCDH’s chief executive, told NYT. “There have been nudifying tools, but they have never had the distribution, ease of use or the integration into a large platform that Elon Musk did with Grok.”


Apps like Grok are explicitly banned under Google’s rules—why is it still in the Play Store?

Elon Musk’s xAI recently weakened content guardrails for image generation in the Grok AI bot. That led to a new spate of non-consensual sexual imagery on X, much of it aimed at silencing women on the platform. This, along with the creation of sexualized images of children by the now more compliant Grok, has led regulators to begin investigating xAI. In the meantime, Google has rules in place for exactly this eventuality—it’s just not enforcing them.

It really could not be clearer from Google’s publicly available policies that Grok should have been banned yesterday. And yet, it remains in the Play Store. Not only that—it enjoys a T for Teen rating, one notch below the M-rated X app. Apple also still offers the Grok app on its platform, but its rules actually leave more wiggle room.

App content restrictions at Apple and Google have evolved in very different ways. From the start, Apple has been prone to removing apps on a whim, so developers have come to expect that Apple’s guidelines may not mention every possible eventuality. As Google has shifted from a laissez-faire attitude to more hard-nosed control of the Play Store, it has progressively piled on clarifications in the content policy. As a result, Google’s rules are spelled out in no uncertain terms, and Grok runs afoul of them.

Google has a dedicated support page that explains how to interpret its “Inappropriate Content” policy for the Play Store. Like Apple’s, Google’s rules begin with a ban on apps that contain or promote sexual content, including, but not limited to, pornography. That’s where Apple stops, but Google goes on to list more types of content and experiences that it considers against the rules.

“We don’t allow apps that contain or promote content associated with sexually predatory behavior, or distribute non-consensual sexual content,” the Play Store policy reads (emphasis ours). So the policy is taking aim at apps like Grok, but this line on its own could be read as focused on apps featuring “real” sexual content. However, Google is very thorough and has helpfully explained that this rule covers AI.

Recent additions to Google’s Play Store policy explicitly ban apps like Grok. Credit: Google

The detailed policy includes examples of content that violates this rule, covering much of what you’d expect—nothing lewd or profane, no escort services, and no illegal sexual themes. After a spate of rudimentary “nudify” apps in 2020 and 2021, Google added language to this page clarifying that “apps that claim to undress people” are not allowed in Google Play. In 2023, as the AI boom got underway, Google added another line to note that it would also remove apps that contained “non-consensual sexual content created via deepfake or similar technology.”


Teen sues to destroy the nudify app that left her in constant fear

A spokesperson told The Wall Street Journal that “nonconsensual pornography and the tools to create it are explicitly forbidden by Telegram’s terms of service and are removed whenever discovered.”

For the teen suing, the prime target remains ClothOff itself. Her lawyers think it’s possible that she can get the app and its affiliated sites blocked in the US, the WSJ reported, if ClothOff fails to respond and the court awards her default judgment.

But no matter the outcome of the litigation, the teen expects to be forever “haunted” by the fake nudes that a high school boy generated without facing any charges.

According to the WSJ, the teen girl sued the boy who she said made her want to drop out of school. Her complaint noted that she was informed that “the individuals responsible and other potential witnesses failed to cooperate with, speak to, or provide access to their electronic devices to law enforcement.”

The teen has felt “mortified and emotionally distraught, and she has experienced lasting consequences ever since,” her complaint said. She has no idea if ClothOff can continue to distribute the harmful images, and she has no clue how many teens may have posted them online. Because of these unknowns, she’s certain she’ll spend “the remainder of her life” monitoring “for the resurfacing of these images.”

“Knowing that the CSAM images of her will almost inevitably make their way onto the Internet and be retransmitted to others, such as pedophiles and traffickers, has produced a sense of hopelessness” and “a perpetual fear that her images can reappear at any time and be viewed by countless others, possibly even friends, family members, future partners, colleges, and employers, or the public at large,” her complaint said.

The teen’s lawsuit is the newest front in a wider attempt to crack down on AI-generated CSAM and NCII. It follows prior litigation filed by San Francisco City Attorney David Chiu last year that targeted ClothOff, among 16 popular apps used to “nudify” photos of mostly women and young girls.

About 45 states have criminalized fake nudes, the WSJ reported, and earlier this year, Donald Trump signed the Take It Down Act into law, which requires platforms to remove both real and AI-generated NCII within 48 hours of victims’ reports.


To shield kids, California hikes fake nude fines to $250K max

California is cracking down on AI technology deemed too harmful for kids, attacking two increasingly notorious child safety fronts: companion bots and deepfake pornography.

On Monday, Governor Gavin Newsom signed the first-ever US law regulating companion bots after several teen suicides sparked lawsuits.

Moving forward, California will require any companion bot platforms—including ChatGPT, Grok, Character.AI, and the like—to create and make public “protocols to identify and address users’ suicidal ideation or expressions of self-harm.”

They must also report to the Department of Public Health “statistics regarding how often they provided users with crisis center prevention notifications,” the governor’s office said. Those stats will also be posted on the platforms’ websites, potentially helping lawmakers and parents track any disturbing trends.

Further, companion bots will be banned from claiming that they’re therapists, and platforms must take extra steps to ensure child safety, including providing kids with break reminders and preventing kids from viewing sexually explicit images.

Additionally, Newsom strengthened the state’s penalties for those who create deepfake pornography, which could help shield young people, who are increasingly targeted with fake nudes, from cyberbullying.

Now any victims, including minors, can seek up to $250,000 in damages per deepfake from any third parties who knowingly distribute nonconsensual sexually explicit material created using AI tools. Previously, the state allowed victims to recover “statutory damages of not less than $1,500 but not more than $30,000, or $150,000 for a malicious violation.”

Both laws take effect January 1, 2026.

American families “are in a battle” with AI

The companion bot law’s sponsor, Democratic Senator Steve Padilla, said in a press release celebrating the signing that the California law demonstrates how to “put real protections into place” and said it “will become the bedrock for further regulation as this technology develops.”


Nudify app’s plan to dominate deepfake porn hinges on Reddit, docs show


Report: Clothoff ignored California’s lawsuit while buying up 10 rivals.

Clothoff—one of the leading apps used to quickly and cheaply make fake nudes from images of real people—reportedly is planning a global expansion to continue dominating deepfake porn online.

Also known as a nudify app, Clothoff has resisted attempts to unmask and confront its operators. Last August, the app was among those that San Francisco’s city attorney, David Chiu, sued in hopes of forcing a shutdown. But recently, a whistleblower—who had “access to internal company information” as a former Clothoff employee—told the investigative outlet Der Spiegel that the app’s operators “seem unimpressed by the lawsuit” and instead of worrying about shutting down have “bought up an entire network of nudify apps.”

Der Spiegel found evidence that Clothoff today owns at least 10 other nudify services, attracting “monthly views ranging between hundreds of thousands to several million.” The outlet granted the whistleblower anonymity to discuss the expansion plans, which the whistleblower claimed were motivated by Clothoff employees growing “cynical” and “obsessed with money” over time as the app—which once felt like an “exciting startup”—gained momentum. Because generating convincing fake nudes can cost just a few bucks, chasing profits seemingly relies on attracting as many repeat users to as many destinations as possible.

Currently, Clothoff runs on an annual budget of around $3.5 million, the whistleblower told Der Spiegel. It has shifted its marketing methods since its launch, apparently now relying largely on Telegram bots and X channels to target ads at young men likely to use its apps.

Der Spiegel’s report documents Clothoff’s “large-scale marketing plan” to expand into the German market, as revealed by the whistleblower. The alleged campaign hinges on producing “naked images of well-known influencers, singers, and actresses,” seeking to entice ad clicks with the tagline “you choose who you want to undress.”

A few of the stars named in the plan confirmed to Der Spiegel that they never agreed to this use of their likenesses, with some of their representatives suggesting that they would pursue legal action if the campaign is ever launched.

However, even celebrities like Taylor Swift have struggled to combat deepfake nudes spreading online, while tools like Clothoff are increasingly used to torment young girls in middle and high school.

Similar celebrity campaigns are planned for other markets, Der Spiegel reported, including the British, French, and Spanish markets. And Clothoff has already become a go-to tool in the US: it was targeted not only in the San Francisco city attorney’s lawsuit but also in a complaint from a New Jersey high schooler suing a boy who used Clothoff to nudify one of her Instagram photos, taken when she was 14 years old, and then shared it with other boys on Snapchat.

Clothoff is seemingly hoping to entice more young boys worldwide to use its apps for such purposes. The whistleblower told Der Spiegel that most of Clothoff’s marketing budget goes toward “advertising posts in special Telegram channels, in sex subs on Reddit, and on 4chan.”

In ads, the app planned to specifically target “men between 16 and 35” who like benign stuff like “memes” and “video games,” as well as more toxic stuff like “right-wing extremist ideas,” “misogyny,” and “Andrew Tate,” an influencer criticized for promoting misogynistic views to teen boys.

Chiu was hoping to defend young women increasingly targeted with fake nudes by shutting down Clothoff, along with several other nudify apps named in his lawsuit. But so far, while Chiu has reached a settlement shutting down two websites, porngen.art and undresser.ai, attempts to serve Clothoff through available legal channels have not been successful, Alex Barrett-Shorter, deputy press secretary for Chiu’s office, told Ars.

Meanwhile, Clothoff continues to evolve, recently marketing a feature that Clothoff claims attracted more than a million users eager to make explicit videos out of a single picture.

Clothoff denies it plans to use influencers

Der Spiegel’s efforts to unmask the operators of Clothoff led the outlet to Eastern Europe, after reporters stumbled upon a “database accidentally left open on the Internet” that seemingly exposed “four central people behind the website.”

This was “consistent,” Der Spiegel said, with a whistleblower claim that all Clothoff employees “work in countries that used to belong to the Soviet Union.” Additionally, Der Spiegel noted that all Clothoff internal communications it reviewed were written in Russian, and the site’s email service is based in Russia.

A person claiming to be a Clothoff spokesperson named Elias denied knowing any of the four individuals flagged in the investigation, Der Spiegel reported, and disputed the reported budget figure. Elias claimed a nondisclosure agreement prevented him from discussing Clothoff’s team any further. However, soon after Der Spiegel reached out, Clothoff took down the database, which had a name that translated to “my babe.”

Regarding the shared marketing plan for global expansion, Elias denied that Clothoff intended to use celebrity influencers, saying that “Clothoff forbids the use of photos of people without their consent.”

He also denied that Clothoff could be used to nudify images of minors; however, one Clothoff user who spoke to Der Spiegel on the condition of anonymity said that his attempt to generate a fake nude of a US singer failed initially because she “looked like she might be underage.” But his second attempt a few days later successfully generated the fake nude with no problem, suggesting that Clothoff’s age detection may not work reliably.

As Clothoff’s growth appears unstoppable, the user explained to Der Spiegel why he doesn’t feel that conflicted about using the app to generate fake nudes of a famous singer.

“There are enough pictures of her on the Internet as it is,” the user reasoned.

However, that user draws the line at generating fake nudes of private individuals, insisting, “If I ever learned of someone producing such photos of my daughter, I would be horrified.”

For young boys who appear flippant about creating fake nude images of their classmates, the consequences have ranged from suspensions to juvenile criminal charges, and for some, there could be other costs. In the lawsuit where the high schooler is attempting to sue a boy who used Clothoff to bully her, boys who participated in the group chats are currently resisting requests to share what evidence remains on their phones. If she wins her fight, she’s asking for $150,000 in damages per image shared, so sharing chat logs could potentially increase the price tag.

Since she and the San Francisco city attorney each filed their lawsuits, the Take It Down Act has passed. That law makes it easier to force platforms to remove AI-generated fake nudes. But experts expect the law to face legal challenges over censorship fears, so even this limited legal tool might not withstand scrutiny.

Either way, the Take It Down Act is a safeguard that came too late for the earliest victims of nudify apps in the US, only some of whom are seeking justice in court, partly because opaque laws long made it unclear whether generating a fake nude was even illegal.

“Jane Doe is one of many girls and women who have been and will continue to be exploited, abused, and victimized by non-consensual pornography generated through artificial intelligence,” the high schooler’s complaint noted. “Despite already being victimized by Defendant’s actions, Jane Doe has been forced to bring this action to protect herself and her rights because the governmental institutions that are supposed to protect women and children from being violated and exploited by the use of AI to generate child pornography and nonconsensual nude images failed to do so.”


NJ teen wins fight to put nudify app users in prison, impose fines up to $30K


Here’s how one teen plans to fix schools failing kids affected by nudify apps.

When Francesca Mani was 14 years old, boys at her New Jersey high school used nudify apps to target her and other girls. At the time, adults did not seem to take the harassment seriously, telling her to move on after she demanded more severe consequences than just a single boy’s one- or two-day suspension.

Mani refused to take adults’ advice, going over their heads to lawmakers who were more sensitive to her demands. And now, she’s won her fight to criminalize deepfakes. On Wednesday, New Jersey Governor Phil Murphy signed a law that he said would help victims “take a stand against deceptive and dangerous deepfakes” by making it a crime to create or share fake AI nudes of minors or non-consenting adults—as well as deepfakes seeking to meddle with elections or damage any individuals’ or corporations’ reputations.

Under the law, victims like Mani who are targeted by nudify apps can sue bad actors, collecting up to $1,000 per harmful image created either knowingly or recklessly. New Jersey hopes these “more severe consequences” will deter kids and adults from creating harmful images, and will emphasize to schools—whose lax responses to fake nudes have been heavily criticized—that AI-generated nude images depicting minors are illegal, must be taken seriously, and must be reported to police. The law imposes a maximum fine of $30,000 on anyone creating or sharing deepfakes for malicious purposes, as well as possible punitive damages if a victim can prove that images were created in willful defiance of the law.

Ars could not reach Mani for comment, but she celebrated the win in the governor’s press release, saying, “This victory belongs to every woman and teenager told nothing could be done, that it was impossible, and to just move on. It’s proof that with the right support, we can create change together.”

On LinkedIn, her mother, Dorota Mani—who has been working with the governor’s office on a commission to protect kids from online harms—thanked lawmakers like Murphy and former New Jersey Assemblyman Herb Conaway, who sponsored the law, for “standing with us.”

“When used maliciously, deepfake technology can dismantle lives, distort reality, and exploit the most vulnerable among us,” Conaway said. “I’m proud to have sponsored this legislation when I was still in the Assembly, as it will help us keep pace with advancing technology. This is about drawing a clear line between innovation and harm. It’s time we take a firm stand to protect individuals from digital deception, ensuring that AI serves to empower our communities.”

Doing nothing is no longer an option for schools, teen says

Around the country, as cases like Mani’s continue to pop up, experts expect that shame prevents most victims from coming forward to flag abuses, suspecting that the problem is much more widespread than media reports suggest.

Encode Justice has a tracker monitoring reported cases involving minors, including allowing victims to anonymously report harms around the US. But the true extent of the harm currently remains unknown, as cops warn of a flood of AI child sex images obscuring investigations into real-world child abuse.

Confronting this shadowy threat to kids everywhere, Mani was named as one of TIME’s most influential people in AI last year due to her advocacy fighting deepfakes. She’s not only pressured lawmakers to take strong action to protect vulnerable people, but she’s also pushed for change at tech companies and in schools nationwide.

“When that happened to me and my classmates, we had zero protection whatsoever,” Mani told TIME, and neither did other girls around the world who had been targeted and reached out to thank her for fighting for them. “There were so many girls from different states, different countries. And we all had three things in common: the lack of AI school policies, the lack of laws, and the disregard of consent.”

Yiota Souras, chief legal officer at the National Center for Missing and Exploited Children, told CBS News last year that protecting teens started with laws that criminalize sharing fake nudes and provide civil remedies, just as New Jersey’s law does. That way, “schools would have protocols,” she said, and “investigators and law enforcement would have roadmaps on how to investigate” and “what charges to bring.”

Clarity is urgently needed in schools, advocates say. At Mani’s school, the boys who shared the photos had their names shielded and were pulled out of class individually to be interrogated, but victims like Mani had no privacy whatsoever. Their names were blared over the school’s loudspeaker system as boys mocked their tears in the hallway. To this day, it’s unclear who exactly shared the images and who may still have copies, which experts say could haunt Mani throughout her life. And the school’s inadequate response was a major reason why Mani decided to take a stand, seemingly viewing the school as a vehicle furthering her harassment.

“I realized I should stop crying and be mad, because this is unacceptable,” Mani told CBS News.

Mani pushed for NJ’s new law and claimed the win, but she thinks that change must start at schools, where most of the harassment begins. In her school district, the “harassment, intimidation and bullying” policy was updated to incorporate AI harms, but she thinks schools should go even further. Working with Encode Justice, she is helping to push a plan to fix schools that are failing kids targeted by nudify apps.

“My goal is to protect women and children—and we first need to start with AI school policies, because this is where most of the targeting is happening,” Mani told TIME.

Encode Justice did not respond to Ars’ request to comment. But its plan noted a common pattern in schools throughout the US. Students learn about nudify apps through ads on social media—Instagram reportedly drives 90 percent of traffic to one such nudify app—where they can also usually find innocuous photos of classmates to screenshot. Within seconds, the apps can nudify the screenshotted images, which Mani told CBS News then spread “rapid fire” by text message and DMs, often over school networks.

To end the abuse, schools need to be prepared, Encode Justice said, especially since “their initial response can sometimes exacerbate the situation.”

At Mani’s school, for example, leadership was criticized for announcing the victims’ names over the loudspeaker, which Encode Justice said never should have happened. Another misstep occurred at a California middle school, which delayed action for four months until parents went to police, Encode Justice said. In Texas, a school failed to stop images from spreading for eight months while a victim pleaded for help from administrators and police, who failed to intervene. The longer the delays, the more victims are likely to be targeted. In Pennsylvania, a single ninth grader targeted 46 girls before anyone stepped in.

Students deserve better, Mani feels, and Encode Justice’s plan recommends that all schools create action plans to stop failing students and respond promptly to stop image sharing.

That starts with updating policies to ban deepfake sexual imagery, then clearly communicating to students “the seriousness of the issue and the severity of the consequences.” Consequences should include identifying all perpetrators and issuing suspensions or expulsions on top of any legal consequences students face, Encode Justice suggested. They also recommend establishing “written procedures to discreetly inform relevant authorities about incidents and to support victims at the start of an investigation on deepfake sexual abuse.” And, critically, all teachers must be trained on these new policies.

“Doing nothing is no longer an option,” Mani said.
