Nudify app’s plan to dominate deepfake porn hinges on Reddit, docs show


Report: Clothoff ignored California’s lawsuit while buying up 10 rivals.

Clothoff—one of the leading apps used to quickly and cheaply make fake nudes from images of real people—is reportedly planning a global expansion to continue dominating deepfake porn online.

Also known as a nudify app, Clothoff has resisted attempts to unmask and confront its operators. Last August, the app was among those that San Francisco’s city attorney, David Chiu, sued in hopes of forcing a shutdown. But recently, a whistleblower—who had “access to internal company information” as a former Clothoff employee—told the investigative outlet Der Spiegel that the app’s operators “seem unimpressed by the lawsuit” and instead of worrying about shutting down have “bought up an entire network of nudify apps.”

Der Spiegel found evidence that Clothoff today owns at least 10 other nudify services, which together attract “monthly views ranging between hundreds of thousands to several million.” The outlet granted the whistleblower anonymity to discuss the expansion plans, which the whistleblower claimed were motivated by Clothoff employees growing “cynical” and “obsessed with money” over time as the app—which once felt like an “exciting startup”—gained momentum. Because generating convincing fake nudes can cost just a few bucks, chasing profits seemingly depends on attracting as many repeat users to as many destinations as possible.

Currently, Clothoff runs on an annual budget of around $3.5 million, the whistleblower told Der Spiegel. It has shifted its marketing methods since its launch, apparently now relying largely on Telegram bots and X channels to target ads at young men likely to use its apps.

Der Spiegel’s report documents Clothoff’s “large-scale marketing plan” to expand into the German market, as revealed by the whistleblower. The alleged campaign hinges on producing “naked images of well-known influencers, singers, and actresses,” seeking to entice ad clicks with the tagline “you choose who you want to undress.”

A few of the stars named in the plan confirmed to Der Spiegel that they never agreed to this use of their likenesses, with some of their representatives suggesting that they would pursue legal action if the campaign is ever launched.

However, even celebrities like Taylor Swift have struggled to combat deepfake nudes spreading online, while tools like Clothoff are increasingly used to torment young girls in middle and high school.

Der Spiegel reported that similar celebrity campaigns are planned for other markets, including Britain, France, and Spain. And Clothoff has notably already become a go-to tool in the US, targeted not only in the San Francisco city attorney’s lawsuit but also in a complaint from a New Jersey high schooler who is suing a boy who used Clothoff to nudify one of her Instagram photos, taken when she was 14 years old, and then shared it with other boys on Snapchat.

Clothoff is seemingly hoping to entice more young boys worldwide to use its apps for such purposes. The whistleblower told Der Spiegel that most of Clothoff’s marketing budget goes toward “advertising posts in special Telegram channels, in sex subs on Reddit, and on 4chan.”

In ads, the app planned to specifically target “men between 16 and 35” who like benign stuff like “memes” and “video games,” as well as more toxic stuff like “right-wing extremist ideas,” “misogyny,” and “Andrew Tate,” an influencer criticized for promoting misogynistic views to teen boys.

Chiu hoped to defend young women increasingly targeted by fake nudes by shutting down Clothoff, along with several other nudify apps named in his lawsuit. So far, Chiu has reached a settlement shutting down two websites, porngen.art and undresser.ai, but attempts to serve Clothoff through available legal channels have not been successful, Alex Barrett-Shorter, deputy press secretary for Chiu’s office, told Ars.

Meanwhile, Clothoff continues to evolve, recently marketing a feature that it claims attracted more than a million users eager to make explicit videos out of a single picture.

Clothoff denies it plans to use influencers

Der Spiegel’s efforts to unmask the operators of Clothoff led the outlet to Eastern Europe, after reporters stumbled upon a “database accidentally left open on the Internet” that seemingly exposed “four central people behind the website.”

This was “consistent,” Der Spiegel said, with a whistleblower claim that all Clothoff employees “work in countries that used to belong to the Soviet Union.” Additionally, Der Spiegel noted that all Clothoff internal communications it reviewed were written in Russian, and the site’s email service is based in Russia.

A person claiming to be a Clothoff spokesperson, who gave the name Elias, denied knowing any of the four individuals flagged in the outlet’s investigation, Der Spiegel reported, and disputed the $3.5 million budget figure. Elias claimed a nondisclosure agreement prevented him from discussing Clothoff’s team any further. Soon after Der Spiegel reached out, however, Clothoff took down the database, which had a name that translated to “my babe.”

Regarding the shared marketing plan for global expansion, Elias denied that Clothoff intended to use celebrity influencers, saying that “Clothoff forbids the use of photos of people without their consent.”

He also denied that Clothoff could be used to nudify images of minors. However, one Clothoff user who spoke to Der Spiegel on the condition of anonymity confirmed that his attempt to generate a fake nude of a US singer initially failed because she “looked like she might be underage.” But his second attempt a few days later successfully generated the fake nude with no problem, suggesting that Clothoff’s age detection does not work reliably.

As Clothoff’s growth appears unstoppable, the user explained to Der Spiegel why he doesn’t feel that conflicted about using the app to generate fake nudes of a famous singer.

“There are enough pictures of her on the Internet as it is,” the user reasoned.

However, that user draws the line at generating fake nudes of private individuals, insisting, “If I ever learned of someone producing such photos of my daughter, I would be horrified.”

For young boys who appear flippant about creating fake nude images of their classmates, the consequences have ranged from suspensions to juvenile criminal charges, and for some, there could be other costs. In the lawsuit in which the high schooler is suing the boy who used Clothoff to bully her, boys who participated in the group chats are currently resisting requests to share the evidence on their phones. If she wins her fight, she is seeking $150,000 in damages per image shared, so turning over chat logs could substantially increase the price tag.

Since she and the San Francisco city attorney each filed their lawsuits, the Take It Down Act has passed, making it easier to force platforms to remove AI-generated fake nudes. But experts expect the law to face legal challenges over censorship fears, so even this limited legal tool might not withstand scrutiny.

Either way, the Take It Down Act is a safeguard that came too late for the earliest victims of nudify apps in the US, only some of whom have turned to the courts seeking justice, since opaque laws long made it unclear whether generating a fake nude was even illegal.

“Jane Doe is one of many girls and women who have been and will continue to be exploited, abused, and victimized by non-consensual pornography generated through artificial intelligence,” the high schooler’s complaint noted. “Despite already being victimized by Defendant’s actions, Jane Doe has been forced to bring this action to protect herself and her rights because the governmental institutions that are supposed to protect women and children from being violated and exploited by the use of AI to generate child pornography and nonconsensual nude images failed to do so.”

Ashley is a senior policy reporter for Ars Technica, dedicated to tracking social impacts of emerging policies and new technologies. She is a Chicago-based journalist with 20 years of experience.

NJ teen wins fight to put nudify app users in prison, impose fines up to $30K


Here’s how one teen plans to fix schools failing kids affected by nudify apps.

When Francesca Mani was 14 years old, boys at her New Jersey high school used nudify apps to target her and other girls. At the time, adults did not seem to take the harassment seriously, telling her to move on after she demanded more severe consequences than a single boy’s one- or two-day suspension.

Mani refused to take adults’ advice, going over their heads to lawmakers who were more receptive to her demands. And now, she’s won her fight to criminalize deepfakes. On Wednesday, New Jersey Governor Phil Murphy signed a law that he said would help victims “take a stand against deceptive and dangerous deepfakes” by making it a crime to create or share fake AI nudes of minors or non-consenting adults—as well as deepfakes seeking to meddle with elections or damage any individual’s or corporation’s reputation.

Under the law, victims like Mani who are targeted by nudify apps can sue bad actors, collecting up to $1,000 per harmful image created either knowingly or recklessly. New Jersey hopes these “more severe consequences” will deter kids and adults from creating harmful images, as well as emphasize to schools—whose lax responses to fake nudes have been heavily criticized—that AI-generated nude images depicting minors are illegal, must be taken seriously, and must be reported to police. The law also imposes a maximum fine of $30,000 on anyone creating or sharing deepfakes for malicious purposes, as well as possible punitive damages if a victim can prove that images were created in willful defiance of the law.

Ars could not reach Mani for comment, but she celebrated the win in the governor’s press release, saying, “This victory belongs to every woman and teenager told nothing could be done, that it was impossible, and to just move on. It’s proof that with the right support, we can create change together.”

On LinkedIn, her mother, Dorota Mani—who has been working with the governor’s office on a commission to protect kids from online harms—thanked lawmakers like Murphy and former New Jersey Assemblyman Herb Conaway, who sponsored the law, for “standing with us.”

“When used maliciously, deepfake technology can dismantle lives, distort reality, and exploit the most vulnerable among us,” Conaway said. “I’m proud to have sponsored this legislation when I was still in the Assembly, as it will help us keep pace with advancing technology. This is about drawing a clear line between innovation and harm. It’s time we take a firm stand to protect individuals from digital deception, ensuring that AI serves to empower our communities.”

Doing nothing is no longer an option for schools, teen says

Around the country, as cases like Mani’s continue to pop up, experts suspect the problem is much more widespread than media reports suggest, since shame likely prevents most victims from coming forward to flag abuses.

Encode Justice runs a tracker monitoring reported cases involving minors, which also allows victims around the US to anonymously report harms. But the true extent of the harm remains unknown, as cops warn that a flood of AI-generated child sex images is obscuring investigations into real-world child abuse.

Confronting this shadowy threat to kids everywhere, Mani was named one of TIME’s most influential people in AI last year for her advocacy fighting deepfakes. She has not only pressured lawmakers to take strong action to protect vulnerable people but also pushed for change at tech companies and in schools nationwide.

“When that happened to me and my classmates, we had zero protection whatsoever,” Mani told TIME, and neither did other girls around the world who had been targeted and reached out to thank her for fighting for them. “There were so many girls from different states, different countries. And we all had three things in common: the lack of AI school policies, the lack of laws, and the disregard of consent.”

Yiota Souras, chief legal officer at the National Center for Missing and Exploited Children, told CBS News last year that protecting teens started with laws that criminalize sharing fake nudes and provide civil remedies, just as New Jersey’s law does. That way, “schools would have protocols,” she said, and “investigators and law enforcement would have roadmaps on how to investigate” and “what charges to bring.”

Clarity is urgently needed in schools, advocates say. At Mani’s school, the boys who shared the photos had their names shielded and were pulled out of class individually to be interrogated, but victims like Mani had no privacy whatsoever. Their names were blared over the school’s loudspeaker system as boys mocked their tears in the hallway. To this day, it’s unclear who exactly shared, and possibly still has, copies of the images, which experts say could haunt Mani throughout her life. The school’s inadequate response was a major reason why Mani decided to take a stand; she seemingly came to view the school as a vehicle furthering her harassment.

“I realized I should stop crying and be mad, because this is unacceptable,” Mani told CBS News.

Mani pushed for NJ’s new law and claimed the win, but she thinks that change must start at schools, where the harassment begins. In her school district, the “harassment, intimidation and bullying” policy was updated to incorporate AI harms, but she thinks schools should go even further. Working with Encode Justice, she is helping to push a plan to fix schools that are failing kids targeted by nudify apps.

“My goal is to protect women and children—and we first need to start with AI school policies, because this is where most of the targeting is happening,” Mani told TIME.

Encode Justice did not respond to Ars’ request for comment. But its plan noted a common pattern in schools throughout the US. Students learn about nudify apps through ads on social media—Instagram reportedly drives 90 percent of traffic to one such app—where they can also usually find innocuous photos of classmates to screenshot. Within seconds, the apps can nudify the screenshotted images, which Mani told CBS News then spread “rapid fire” by text message and DMs, often over school networks.

To end the abuse, schools need to be prepared, Encode Justice said, especially since “their initial response can sometimes exacerbate the situation.”

At Mani’s school, for example, leadership was criticized for announcing the victims’ names over the loudspeaker, which Encode Justice said never should have happened. Another misstep came at a California middle school, which delayed action for four months until parents went to police, Encode Justice said. In Texas, a school failed to stop images from spreading for eight months while a victim pleaded for help from administrators and police, who failed to intervene. The longer the delays, the more victims are likely to be targeted; in Pennsylvania, a single ninth grader targeted 46 girls before anyone stepped in.

Students deserve better, Mani feels, and Encode Justice’s plan recommends that all schools create action plans so they stop failing students and can respond promptly to halt image sharing.

That starts with updating policies to ban deepfake sexual imagery, then clearly communicating to students “the seriousness of the issue and the severity of the consequences.” Consequences should include identifying all perpetrators and issuing suspensions or expulsions on top of any legal consequences students face, Encode Justice suggested. The group also recommends establishing “written procedures to discreetly inform relevant authorities about incidents and to support victims at the start of an investigation on deepfake sexual abuse.” And, critically, all teachers must be trained on these new policies.

“Doing nothing is no longer an option,” Mani said.
