

Nudify app’s plan to dominate deepfake porn hinges on Reddit, docs show


Report: Clothoff ignored California’s lawsuit while buying up 10 rivals.

Clothoff—one of the leading apps used to quickly and cheaply make fake nudes from images of real people—reportedly is planning a global expansion to continue dominating deepfake porn online.

Also known as a nudify app, Clothoff has resisted attempts to unmask and confront its operators. Last August, the app was among those that San Francisco’s city attorney, David Chiu, sued in hopes of forcing a shutdown. But recently, a whistleblower—who had “access to internal company information” as a former Clothoff employee—told the investigative outlet Der Spiegel that the app’s operators “seem unimpressed by the lawsuit” and instead of worrying about shutting down have “bought up an entire network of nudify apps.”

Der Spiegel found evidence that Clothoff today owns at least 10 other nudify services, attracting “monthly views ranging between hundreds of thousands to several million.” The outlet granted the whistleblower anonymity to discuss the expansion plans, which the whistleblower claimed were motivated by Clothoff employees growing “cynical” and “obsessed with money” over time as the app—which once felt like an “exciting startup”—gained momentum. Because generating convincing fake nudes costs just a few bucks, chasing profits seemingly depends on drawing as many repeat users to as many destinations as possible.

Currently, Clothoff runs on an annual budget of around $3.5 million, the whistleblower told Der Spiegel. The app has shifted its marketing methods since launch, apparently now relying largely on Telegram bots and X channels to target ads at the young men most likely to use its apps.

Der Spiegel’s report documents Clothoff’s “large-scale marketing plan” to expand into the German market, as revealed by the whistleblower. The alleged campaign hinges on producing “naked images of well-known influencers, singers, and actresses,” seeking to entice ad clicks with the tagline “you choose who you want to undress.”

A few of the stars named in the plan confirmed to Der Spiegel that they never agreed to this use of their likenesses, with some of their representatives suggesting that they would pursue legal action if the campaign is ever launched.

However, even celebrities like Taylor Swift have struggled to combat deepfake nudes spreading online, while tools like Clothoff are increasingly used to torment young girls in middle and high school.

Similar celebrity campaigns are planned for other markets, Der Spiegel reported, including Britain, France, and Spain. And Clothoff has notably already become a go-to tool in the US, targeted not only in the San Francisco city attorney’s lawsuit but also in a complaint from a New Jersey high schooler, who is suing a boy who used Clothoff to nudify one of her Instagram photos, taken when she was 14 years old, and then shared it with other boys on Snapchat.

Clothoff is seemingly hoping to entice more young boys worldwide to use its apps for such purposes. The whistleblower told Der Spiegel that most of Clothoff’s marketing budget goes toward “advertising posts in special Telegram channels, in sex subs on Reddit, and on 4chan.”

In ads, the app planned to specifically target “men between 16 and 35” who like benign stuff like “memes” and “video games,” as well as more toxic stuff like “right-wing extremist ideas,” “misogyny,” and “Andrew Tate,” an influencer criticized for promoting misogynistic views to teen boys.

Chiu hoped to defend the young women increasingly targeted in fake nudes by shutting down Clothoff, along with several other nudify apps named in his lawsuit. But so far, while Chiu has reached a settlement shutting down two websites, porngen.art and undresser.ai, attempts to serve Clothoff through available legal channels have not been successful, Alex Barrett-Shorter, deputy press secretary for Chiu’s office, told Ars.

Meanwhile, Clothoff continues to evolve, recently marketing a feature that Clothoff claims attracted more than a million users eager to make explicit videos out of a single picture.

Clothoff denies it plans to use influencers

Der Spiegel’s efforts to unmask the operators of Clothoff led the outlet to Eastern Europe, after reporters stumbled upon a “database accidentally left open on the Internet” that seemingly exposed “four central people behind the website.”

This was “consistent,” Der Spiegel said, with a whistleblower claim that all Clothoff employees “work in countries that used to belong to the Soviet Union.” Additionally, Der Spiegel noted that all Clothoff internal communications it reviewed were written in Russian, and the site’s email service is based in Russia.

A person claiming to be a Clothoff spokesperson named Elias denied knowing any of the four individuals flagged in the outlet’s investigation, Der Spiegel reported, and disputed the whistleblower’s budget figure. Elias claimed a nondisclosure agreement prevented him from discussing Clothoff’s team any further. However, soon after Der Spiegel reached out, Clothoff took down the database, which had a name that translated to “my babe.”

Regarding the shared marketing plan for global expansion, Elias denied that Clothoff intended to use celebrity influencers, saying that “Clothoff forbids the use of photos of people without their consent.”

He also denied that Clothoff could be used to nudify images of minors. However, one Clothoff user who spoke to Der Spiegel on the condition of anonymity confirmed that his attempt to generate a fake nude of a US singer initially failed because she “looked like she might be underage.” But his second attempt a few days later generated the fake nude with no problem, suggesting that Clothoff’s age detection may not work reliably.

As Clothoff’s growth appears unstoppable, the user explained to Der Spiegel why he doesn’t feel that conflicted about using the app to generate fake nudes of a famous singer.

“There are enough pictures of her on the Internet as it is,” the user reasoned.

However, that user draws the line at generating fake nudes of private individuals, insisting, “If I ever learned of someone producing such photos of my daughter, I would be horrified.”

For young boys who appear flippant about creating fake nude images of their classmates, the consequences have ranged from suspensions to juvenile criminal charges, and for some, there could be other costs. In the lawsuit where the high schooler is suing the boy who used Clothoff to bully her, boys who participated in the group chats are currently resisting sharing what evidence they have on their phones. She is asking for $150,000 in damages per image shared, so if she wins, each chat log turned over could increase the price tag.

Since she and the San Francisco city attorney each filed their lawsuits, the Take It Down Act has passed, making it easier to force platforms to remove AI-generated fake nudes. But experts expect the law to face legal challenges over censorship fears, so even this limited legal tool might not withstand scrutiny.

Either way, the Take It Down Act is a safeguard that came too late for the earliest victims of nudify apps in the US, only some of whom are turning to courts seeking justice, in part because opaque laws long made it unclear whether generating a fake nude was even illegal.

“Jane Doe is one of many girls and women who have been and will continue to be exploited, abused, and victimized by non-consensual pornography generated through artificial intelligence,” the high schooler’s complaint noted. “Despite already being victimized by Defendant’s actions, Jane Doe has been forced to bring this action to protect herself and her rights because the governmental institutions that are supposed to protect women and children from being violated and exploited by the use of AI to generate child pornography and nonconsensual nude images failed to do so.”


Ashley is a senior policy reporter for Ars Technica, dedicated to tracking social impacts of emerging policies and new technologies. She is a Chicago-based journalist with 20 years of experience.



Google won’t downrank top deepfake porn sites unless victims mass report


Today, Google announced new measures to combat the rapidly increasing spread of AI-generated non-consensual explicit deepfakes in its search results.

Because of “a concerning increase in generated images and videos that portray people in sexually explicit contexts, distributed on the web without their consent,” Google said that it consulted with “experts and victim-survivors” to make some “significant updates” to its widely used search engine to “further protect people.”

Specifically, Google made it easier for targets of fake explicit images—which experts have said are overwhelmingly women—to report and remove deepfakes that surface in search results. Additionally, Google took steps to downrank explicit deepfakes “to keep this type of content from appearing high up in Search results,” the world’s leading search engine said.

Victims of deepfake pornography have previously criticized Google for not being more proactive in its fight against deepfakes in search results. Surfacing images and reporting each one is a “time- and energy-draining process” and a “constant battle,” Kaitlyn Siragusa, a Twitch gamer with an explicit OnlyFans who is frequently targeted by deepfakes, told Bloomberg last year.

In response, Google has worked to “make the process easier,” partly by “helping people address this issue at scale.” Now, when a victim submits a removal request, “Google’s systems will also aim to filter all explicit results on similar searches about them,” Google’s blog said. And once a deepfake is “successfully removed,” Google “will scan for—and remove—any duplicates of that image that we find,” the blog said.

Google’s efforts to downrank harmful fake content have also expanded, the tech giant said. To help individuals targeted by deepfakes, Google will now “lower explicit fake content for” searches that include people’s names. According to Google, this step alone has “reduced exposure to explicit image results on these types of queries by over 70 percent.”

However, Google still seems resistant to downranking general searches that might lead people to harmful content. A quick Google search confirms that general searches with keywords like “celebrity nude deepfake” point searchers to popular destinations where they can search for non-consensual intimate images of celebrities or request images of less famous people.

For victims, the bottom line is that problematic links will still appear in Google’s search results for anyone willing to keep scrolling or anyone intentionally searching for “deepfakes.” The only recent step Google has taken against top deepfake sites like Fan-Topia or MrDeepFakes is a promise to demote “sites that have received a high volume of removals for fake explicit imagery.”

It’s currently unclear what Google considers a “high volume,” and Google declined Ars’ request to comment on whether these sites would eventually be downranked. Instead, a Google spokesperson told Ars that “if we receive a high volume of successful removals from a specific website under this policy, we will use that as a ranking signal and demote the site in question for queries where the site might surface.”

Currently, Google is focused on downranking “queries that include the names of individuals,” which “have the highest potential for individual harm,” the company’s spokesperson said. More queries will be downranked in the coming months, the spokesperson said, and according to Google’s blog, the company continues to tackle the “technical challenge for search engines” of differentiating between “explicit content that’s real and consensual (like an actor’s nude scenes)” and “explicit fake content (like deepfakes featuring said actor).”

“This is an ongoing effort, and we have additional improvements coming over the next few months to address a broader range of queries,” Google’s spokesperson told Ars.

Deepfake trauma “never ends”

In its blog, Google said that “these efforts are designed to give people added peace of mind, especially if they’re concerned about similar content about them popping up in the future.”

But many deepfake victims have claimed that dedicating hours or even months to removing harmful content offers no assurance that the images won’t resurface. Most recently, one deepfake victim, Sabrina Javellana, told The New York Times that even after her home state of Florida passed a law against deepfakes, the fake images kept spreading online.

She’s given up on trying to get the images removed anywhere, telling The Times, “It just never ends. I just have to accept it.”

According to US Representative Joseph Morelle (D-NY), it will take a federal law against deepfakes to deter more bad actors from harassing and terrorizing women with deepfake porn. He has introduced one such law, the Preventing Deepfakes of Intimate Images Act, which would criminalize sharing non-consensual deepfake pornography. It currently has 59 sponsors in the House and bipartisan support in the Senate, Morelle said this week on a panel discussing the harms of deepfakes, which Ars attended.

Morelle said he’d spoken to victims of deepfakes, including teenagers, and decided that “a national ban and a national set of both criminal and civil remedies makes the most sense” to combat the problem with “urgency.”

“A patchwork of different state and local jurisdictions with different rules” would be “really hard to follow” for both victims and perpetrators trying to understand what’s legal, Morelle said, whereas federal laws that impose a liability and criminal penalty would likely have “the greatest impact.”

Victims suffer mental, physical, emotional, and financial harms every day, Morelle said. And as co-panelist Andrea Powell pointed out, there can be no healing because there is currently no justice for survivors, amid what she warned is a “prolific and catastrophic increase in this abuse.”



X can’t stop spread of explicit, fake AI Taylor Swift images

Escalating the situation —

Will Swifties’ war on AI fakes spark a deepfake porn reckoning?


Explicit, fake AI-generated images sexualizing Taylor Swift began circulating online this week, quickly sparking mass outrage that may finally force a mainstream reckoning with harms caused by spreading non-consensual deepfake pornography.

A wide variety of deepfakes targeting Swift began spreading on X, the platform formerly known as Twitter, yesterday.

Ars found that, as of this writing, some posts have been removed while others remain online. One X post was viewed more than 45 million times over approximately 17 hours before it was removed, The Verge reported. Seemingly fueling further spread, X promoted the posts under the trending topic “Taylor Swift AI” in some regions, the outlet reported.

The Verge noted that since these images started spreading, “a deluge of new graphic fakes” has appeared. According to Fast Company, these harmful images were posted on X but soon spread to other platforms, including Reddit, Facebook, and Instagram. Some platforms, like X, ban this kind of AI-generated content but seem to struggle to detect it before it becomes widely viewed.

Ars’ AI reporter Benj Edwards warned in 2022 that AI image-generation technology was rapidly advancing, making it easy to train an AI model on just a handful of photos of a person and then use it to generate convincing fake images of that person in unlimited quantities. That is seemingly what happened to Swift, and it’s currently unknown how many different non-consensual deepfakes have been generated or how widely those images have spread.

It’s also unknown what consequences have resulted from spreading the images. At least one verified X user had their account suspended after sharing fake images of Swift, The Verge reported, but Ars reviewed X posts from Swift fans calling out other alleged sharers whose accounts remain active. Swift fans have also been uploading countless favorite photos of Swift to bury the harmful images and keep them out of various X searches. Her fans seem dedicated to reducing the spread however they can, with some posting different addresses, seemingly in attempts to dox an X user who they allege was the initial source of the images.

Neither X nor Swift’s team has yet commented on the deepfakes, but it seems clear that solving the problem will require more than just requesting removals from social media platforms. The AI model trained on Swift’s images is likely still out there, probably procured through one of the known websites that specialize in fine-tuned celebrity AI models. As long as the model exists, anyone with access could crank out as many new images as they wanted, making it hard for even someone with Swift’s resources to make the problem go away for good.

In that way, Swift’s predicament might raise awareness of why creating and sharing non-consensual deepfake pornography is harmful, perhaps moving the culture away from persistent notions that nobody is harmed by non-consensual AI-generated fakes.

Swift’s plight could also inspire regulators to act faster to combat non-consensual deepfake porn. Last year, she inspired a Senate hearing after a Live Nation scandal frustrated her fans, triggering lawmakers’ antitrust concerns about the leading ticket seller, The New York Times reported.

Some lawmakers are already working to combat deepfake porn. Congressman Joe Morelle (D-NY) proposed a law criminalizing deepfake porn earlier this year after teen boys at a New Jersey high school used AI image generators to create and share non-consensual fake nude images of female classmates. Under that proposed law, anyone sharing deepfake pornography without an individual’s consent risks fines and being imprisoned for up to two years. Damages could go as high as $150,000 and imprisonment for as long as 10 years if sharing the images facilitates violence or impacts the proceedings of a government agency.

Elsewhere, the UK’s Online Safety Act bars illegal content, including deepfake pornography, from being shared on platforms. Platforms must moderate such content or risk fines of more than $20 million, or 10 percent of their global annual turnover, whichever amount is higher.

The UK law, however, is controversial because it requires companies to scan private messages for illegal content. That makes it practically impossible for platforms to provide end-to-end encryption, which the American Civil Liberties Union has described as vital for user privacy and security.

As regulators tangle with legal questions and social media users with moral ones, some AI image generators have moved to limit their models from producing NSFW outputs. Stability AI, the company behind Stable Diffusion, did this by removing much of the sexualized imagery from its models’ training data. Others, like Microsoft’s Bing image creator, make it easy for users to report NSFW outputs.

But so far, keeping up with reports of deepfake porn seems to fall squarely on social media platforms’ shoulders. Swift’s battle this week shows how unprepared even the biggest platforms currently are to handle blitzes of harmful images seemingly uploaded faster than they can be removed.



Sharing deepfake porn could lead to lengthy prison time under proposed law

Fake nudes, real harms —

Teen “shouting for change” after fake nude images spread at NJ high school.


The US seems to be getting serious about criminalizing deepfake pornography after teen boys at a New Jersey high school used AI image generators to create and share non-consensual fake nude images of female classmates last October.

On Tuesday, Rep. Joseph Morelle (D-NY) announced that he has re-introduced the “Preventing Deepfakes of Intimate Images Act,” which seeks to “prohibit the non-consensual disclosure of digitally altered intimate images.” Under the proposed law, anyone sharing deepfake pornography without an individual’s consent risks damages that could go as high as $150,000 and imprisonment of up to 10 years if sharing the images facilitates violence or impacts the proceedings of a government agency.

The hope is that steep penalties will deter companies and individuals from allowing the disturbing images to spread. The bill creates a criminal offense for sharing deepfake pornography “with the intent to harass, annoy, threaten, alarm, or cause substantial harm to the finances or reputation of the depicted individual” or with “reckless disregard” or “actual knowledge” that the images will harm the individual depicted. It also provides a path for victims to sue offenders in civil court.

Rep. Tom Kean (R-NJ), who co-sponsored the bill, said that “proper guardrails and transparency are essential for fostering a sense of responsibility among AI companies and individuals using AI.”

“Try to imagine the horror of receiving intimate images looking exactly like you—or your daughter, or your wife, or your sister—and you can’t prove it’s not,” Morelle said. “Deepfake pornography is sexual exploitation, it’s abusive, and I’m astounded it is not already a federal crime.”

Joining Morelle in pushing to criminalize deepfake pornography were Dorota and Francesca Mani, who have spent the past two months meeting with lawmakers, The Wall Street Journal reported. The mother and daughter experienced the horror Morelle described firsthand when the New Jersey high school confirmed that 14-year-old Francesca was among the students targeted last year.

“What happened to me and my classmates was not cool, and there’s no way I’m just going to shrug and let it slide,” Francesca said. “I’m here, standing up and shouting for change, fighting for laws, so no one else has to feel as lost and powerless as I did on October 20th.”

Morelle’s office told Ars that “advocacy from partners like the Mani family” is “critical to bringing attention to this issue” and getting the proposed law “to the floor for a vote.”

Morelle introduced the law in December 2022, but it failed to pass that year or in 2023. He’s re-introducing the law in 2024 after seemingly gaining more support during a House Oversight subcommittee hearing on “Advances in Deepfake Technology” last November.

At that hearing, many lawmakers warned of the dangers of AI-generated deepfakes, citing a study from the Dutch AI company Sensity, which found that 96 percent of deepfakes online are deepfake porn—the majority of which targets women.

But lawmakers also made clear that it’s currently hard to detect AI-generated images and distinguish them from real images.

According to a hearing transcript posted by the nonprofit news organization Tech Policy Press, David Doermann—currently interim chair of the University at Buffalo’s computer science and engineering department and former program manager at the Defense Advanced Research Projects Agency (DARPA)—told lawmakers that DARPA was already working on advanced deepfake detection tools but still had more work to do.

To support laws like Morelle’s, lawmakers have called for more funding for DARPA and the National Science Foundation to aid in ongoing efforts to create effective detection tools. At the same time, President Joe Biden—through a sweeping AI executive order—has pushed for solutions like watermarking deepfakes. Biden’s executive order also instructed the Department of Commerce to establish “standards and best practices for detecting AI-generated content and authenticating official content.”

Morelle is working to push his law through in 2024, warning that deepfake pornography is already affecting a “generation of young women like Francesca,” who are “ready to stand up against systemic oppression and stand in their power.”

Until the federal government figures out how to best prevent the sharing of AI-generated deepfakes, Francesca and her mom plan to keep pushing for change.

“Our voices are our secret weapon, and our words are like power-ups in Fortnite,” Francesca said. “My mom and I are advocating to create a world where being safe isn’t just a hope; it’s a reality for everyone.”



Report: Deepfake porn consistently found atop Google, Bing search results

Shocking results —

Google vows to create more safeguards to protect victims of deepfake porn.


Popular search engines like Google and Bing are making it easy to surface nonconsensual deepfake pornography by placing it at the top of search results, NBC News reported Thursday.

These controversial deepfakes superimpose faces of real women, often celebrities, onto the bodies of adult entertainers to make them appear to be engaging in real sex. Thanks in part to advances in generative AI, there is now a burgeoning black market for deepfake porn that could be discovered through a Google search, NBC News previously reported.

NBC News uncovered the problem by turning off safe search, then combining the names of 36 female celebrities with obvious search terms like “deepfakes,” “deepfake porn,” and “fake nudes.” Bing generated links to deepfake videos in top results 35 times, while Google did so 34 times. Bing also surfaced “fake nude photos of former teen Disney Channel female actors” using images in which the actors appear to be underage.

A Google spokesperson told NBC that the tech giant understands “how distressing this content can be for people affected by it” and is “actively working to bring more protections to Search.”

According to Google’s spokesperson, this controversial content sometimes appears because “Google indexes content that exists on the web,” just “like any search engine.” But while searches using terms like “deepfake” may consistently surface such results, Google “actively” designs its “ranking systems to avoid shocking people with unexpected harmful or explicit content that they aren’t looking for,” the spokesperson said.

Currently, the only way to remove nonconsensual deepfake porn from Google search results is for the victim to submit a form personally or through an “authorized representative.” The form asks victims to show three things: that they’re “identifiably depicted” in the deepfake; that the “imagery in question is fake and falsely depicts” them as “nude or in a sexually explicit situation”; and that the imagery was distributed without their consent.

While this gives victims some recourse to remove content, experts are concerned that search engines need to do more to effectively reduce the prevalence of deepfake pornography available online—which right now is rising at a rapid rate.

This emerging issue increasingly affects average people and even children, not just celebrities. Last June, child safety experts discovered thousands of realistic but fake AI child sex images being traded online, around the same time that the FBI warned that the use of AI-generated deepfakes in sextortion schemes was increasing.

And nonconsensual deepfake porn isn’t just being traded in black markets online. In November, New Jersey police launched a probe after high school teens used AI image generators to create and share fake nude photos of female classmates.

With tech companies seemingly slow to stop the rise in deepfakes, some states have passed laws criminalizing deepfake porn distribution. Last July, Virginia amended its existing law criminalizing revenge porn to include any “falsely created videographic or still image.” In October, New York passed a law specifically banning deepfake porn, imposing a $1,000 fine and up to a year of jail time on violators. Members of Congress have also introduced legislation that would create criminal penalties for spreading deepfake porn.

Although Google told NBC News that its search features “don’t allow manipulated media or sexually explicit content,” the outlet’s investigation seemingly found otherwise. NBC News also noted that Google’s Play app store hosts an app that was previously marketed for creating deepfake porn, despite Google prohibiting “apps determined to promote or perpetuate demonstrably misleading or deceptive imagery, videos and/or text.” This suggests that Google’s enforcement against deceptive imagery may be inconsistent.

Google told Ars that it will soon be strengthening its policies against apps featuring AI-generated restricted content in the Play Store. A generative AI policy taking effect on January 31 will require all apps to comply with developer policies that ban AI-generated restricted content, including deceptive content and content that facilitates the exploitation or abuse of children.

Experts told NBC News that “Google’s lack of proactive patrolling for abuse has made it and other search engines useful platforms for people looking to engage in deepfake harassment campaigns.”

Google is currently “in the process of building more expansive safeguards, with a particular focus on removing the need for known victims to request content removals one by one,” Google’s spokesperson told NBC News.

Microsoft’s spokesperson told Ars that Microsoft updated its process for reporting concerns with Bing searches to include non-consensual intimate imagery (NCII) used in “deepfakes” last August because it had become a “significant concern.” Like Google, Microsoft allows victims to report NCII deepfakes by submitting a web form to request removal from search results, understanding that any sharing of NCII is “a gross violation of personal privacy and dignity with devastating effects for victims.”

In the past, Microsoft President Brad Smith has said that among all the dangers AI poses, deepfakes worry him most, though deepfakes fueling “foreign cyber influence operations” seemingly concern him more than deepfake porn.

This story was updated on January 11 to include information on Google’s AI-generated content policy and on January 12 to include information from Microsoft.
