
X ignores revenge porn takedown requests unless DMCA is used, study says

Why did the study target X?

The University of Michigan research team worried that their experiment, which involved posting AI-generated NCII on X, might cross ethical lines.

They chose to conduct the study on X because they deduced it was a platform with “no volunteer moderators” where few paid moderators, if any, would view their AI-generated nude images.

X’s transparency report suggests that most reported non-consensual nudity is actioned by human moderators, but the researchers reported that their flagged content was never actioned unless they filed a DMCA takedown notice.

Since AI image generators are trained on real photos, researchers also took steps to ensure that AI-generated NCII in the study did not re-traumatize victims or depict real people who might stumble on the images on X.

“Each image was tested against a facial-recognition software platform and several reverse-image lookup services to verify it did not resemble any existing individual,” the study said. “Only images confirmed by all platforms to have no resemblance to individuals were selected for the study.”

These more “ethical” images were posted on X using popular hashtags like #porn, #hot, and #xxx, but their reach was limited to minimize potential harm, researchers said.

“Our study may contribute to greater transparency in content moderation processes” related to NCII “and may prompt social media companies to invest additional efforts to combat deepfake” NCII, researchers said. “In the long run, we believe the benefits of this study far outweigh the risks.”

According to the researchers, X was given time to automatically detect and remove the content but failed to do so. It’s possible, the study suggested, that X’s decision to allow explicit content starting in June made it harder to detect NCII, as some experts had predicted.

To fix the problem, researchers suggested that both “greater platform accountability” and “legal mechanisms to ensure that accountability” are needed—as is much more research on other platforms’ mechanisms for removing NCII.

“A dedicated” NCII law “must clearly define victim-survivor rights and impose legal obligations on platforms to act swiftly in removing harmful content,” the study concluded.


Popular AI “nudify” sites sued amid shocking rise in victims globally

San Francisco’s city attorney David Chiu is suing to shut down 16 of the most popular websites and apps allowing users to “nudify” or “undress” photos of mostly women and girls who have been increasingly harassed and exploited by bad actors online.

These sites, Chiu’s suit claimed, are “intentionally” designed to “create fake, nude images of women and girls without their consent,” boasting that any users can upload any photo to “see anyone naked” by using tech that realistically swaps the faces of real victims onto AI-generated explicit images.

“In California and across the country, there has been a stark increase in the number of women and girls harassed and victimized by AI-generated” non-consensual intimate imagery (NCII) and “this distressing trend shows no sign of abating,” Chiu’s suit said.

“Given the widespread availability and popularity” of nudify websites, “San Franciscans and Californians face the threat that they or their loved ones may be victimized in this manner,” Chiu’s suit warned.

In a press conference, Chiu said that this “first-of-its-kind lawsuit” has been raised to defend not just Californians, but “a shocking number of women and girls across the globe”—from celebrities like Taylor Swift to middle and high school girls. Should the city attorney win, each nudify site risks fines of $2,500 for each violation of California consumer protection law.

On top of media reports sounding alarms about the AI-generated harm, law enforcement has joined the call to ban so-called deepfakes.

Chiu said the harmful deepfakes are often created “by exploiting open-source AI image generation models,” such as earlier versions of Stable Diffusion, that can be honed or “fine-tuned” to easily “undress” photos of women and girls that are frequently yanked from social media. While later versions of Stable Diffusion make such “disturbing” forms of misuse much harder, San Francisco city officials noted at the press conference that fine-tunable earlier versions of Stable Diffusion are still widely available to be abused by bad actors.

In the US alone, police are currently so bogged down by reports of AI-generated child sex images that it has become harder to investigate offline child abuse cases, and these AI cases are expected to keep spiking “exponentially.” The AI abuse has spread so widely that “the FBI has warned of an uptick in extortion schemes using AI generated non-consensual pornography,” Chiu said at the press conference. “And the impact on victims has been devastating,” harming “their reputations and their mental health,” causing “loss of autonomy,” and “in some instances causing individuals to become suicidal.”

Suing on behalf of the people of the state of California, Chiu is seeking an injunction requiring nudify site owners to cease operation of “all websites they own or operate that are capable of creating AI-generated” non-consensual intimate imagery of identifiable individuals. It’s the only way, Chiu said, to hold these sites “accountable for creating and distributing AI-generated NCII of women and girls and for aiding and abetting others in perpetrating this conduct.”

He also wants an order requiring “any domain-name registrars, domain-name registries, webhosts, payment processors, or companies providing user authentication and authorization services or interfaces” to “restrain” nudify site operators from launching new sites to prevent any further misconduct.

Chiu’s suit redacts the names of the most harmful sites his investigation uncovered but claims that in the first six months of 2024, the sites “have been visited over 200 million times.”

While victims typically have little legal recourse, Chiu believes that state and federal laws prohibiting deepfake pornography, revenge pornography, and child pornography, as well as California’s unfair competition law, can be wielded to take down all 16 sites. Chiu expects that a win will serve as a warning to other nudify site operators that more takedowns are likely coming.

“We are bringing this lawsuit to get these websites shut down, but we also want to sound the alarm,” Chiu said at the press conference. “Generative AI has enormous promise, but as with all new technologies, there are unanticipated consequences and criminals seeking to exploit them. We must be clear that this is not innovation. This is sexual abuse.”


Google won’t downrank top deepfake porn sites unless victims mass report

Today, Google announced new measures to combat the rapidly increasing spread of AI-generated non-consensual explicit deepfakes in its search results.

Because of “a concerning increase in generated images and videos that portray people in sexually explicit contexts, distributed on the web without their consent,” Google said that it consulted with “experts and victim-survivors” to make some “significant updates” to its widely used search engine to “further protect people.”

Specifically, Google made it easier for targets of fake explicit images—which experts have said are overwhelmingly women—to report and remove deepfakes that surface in search results. Additionally, Google took steps to downrank explicit deepfakes “to keep this type of content from appearing high up in Search results,” the world’s leading search engine said.

Victims of deepfake pornography have previously criticized Google for not being more proactive in its fight against deepfakes in search results. Surfacing images and reporting each one is a “time- and energy-draining process” and “constant battle,” Kaitlyn Siragusa, a Twitch gamer with an explicit OnlyFans frequently targeted by deepfakes, told Bloomberg last year.

In response, Google has worked to “make the process easier,” partly by “helping people address this issue at scale.” Now, when a victim submits a removal request, “Google’s systems will also aim to filter all explicit results on similar searches about them,” Google’s blog said. And once a deepfake is “successfully removed,” Google “will scan for—and remove—any duplicates of that image that we find,” the blog said.

Google’s efforts to downrank harmful fake content have also expanded, the tech giant said. To help individuals targeted by deepfakes, Google will now “lower explicit fake content for” searches that include people’s names. According to Google, this step alone has “reduced exposure to explicit image results on these types of queries by over 70 percent.”

However, Google still seems resistant to downranking general searches that might lead people to harmful content. A quick Google search confirms that general searches with keywords like “celebrity nude deepfake” point searchers to popular destinations where they can search for non-consensual intimate images of celebrities or request images of less famous people.

For victims, the bottom line is that problematic links will still appear in Google’s search results for anyone willing to keep scrolling or anyone intentionally searching for “deepfakes.” The only step Google has taken recently to downrank top deepfake sites like Fan-Topia or MrDeepFakes is a promise to demote “sites that have received a high volume of removals for fake explicit imagery.”

It’s currently unclear what Google considers a “high volume,” and Google declined Ars’ request to comment on whether these sites would eventually be downranked. Instead, a Google spokesperson told Ars that “if we receive a high volume of successful removals from a specific website under this policy, we will use that as a ranking signal and demote the site in question for queries where the site might surface.”

Currently, the spokesperson said, Google is focused on downranking “queries that include the names of individuals,” which “have the highest potential for individual harm.” More queries will be downranked in the coming months, the spokesperson said. Meanwhile, Google’s blog said, the company continues to tackle the “technical challenge for search engines” of differentiating between “explicit content that’s real and consensual (like an actor’s nude scenes)” and “explicit fake content (like deepfakes featuring said actor).”

“This is an ongoing effort, and we have additional improvements coming over the next few months to address a broader range of queries,” Google’s spokesperson told Ars.

Deepfake trauma “never ends”

In its blog, Google said that “these efforts are designed to give people added peace of mind, especially if they’re concerned about similar content about them popping up in the future.”

But many deepfake victims have said that dedicating hours or even months to removing harmful content offers no assurance that the images won’t resurface. Most recently, one deepfake victim, Sabrina Javellana, told The New York Times that even after her home state of Florida passed a law against deepfakes, the fake images kept spreading online.

She’s given up on trying to get the images removed anywhere, telling The Times, “It just never ends. I just have to accept it.”

According to US Representative Joseph Morelle (D-NY), it will take a federal law against deepfakes to deter more bad actors from harassing and terrorizing women with deepfake porn. He’s introduced one such law, the Preventing Deepfakes of Intimate Images Act, which would criminalize creating deepfakes. It currently has 59 sponsors in the House and bipartisan support in the Senate, Morelle said on a panel this week discussing harms of deepfakes, which Ars attended.

Morelle said he’d spoken to victims of deepfakes, including teenagers, and decided that “a national ban and a national set of both criminal and civil remedies makes the most sense” to combat the problem with “urgency.”

“A patchwork of different state and local jurisdictions with different rules” would be “really hard to follow” for both victims and perpetrators trying to understand what’s legal, Morelle said, whereas federal laws that impose a liability and criminal penalty would likely have “the greatest impact.”

Victims, Morelle said, suffer mental, physical, emotional, and financial harms every day. And as co-panelist Andrea Powell pointed out, there is no healing for survivors because there is currently no justice during this period of “prolific and catastrophic increase in this abuse.”


Butts, breasts, and genitals now explicitly allowed on Elon Musk’s X

Adult content has always proliferated on Twitter, but the platform now called X recently clarified its policy to officially allow “consensually produced and distributed adult nudity or sexual behavior.”

X’s rules seem simple. As long as content is “properly labeled and not prominently displayed,” users can share material—including AI-generated or animated content—”that is pornographic or intended to cause sexual arousal.”

“We believe that users should be able to create, distribute, and consume material related to sexual themes as long as it is consensually produced and distributed,” X’s policy said.

The policy update seemingly reflects X’s core mission to defend all legal speech. It protects a wide range of sexual expression, including depictions of explicit or implicit sexual behavior, simulated sexual intercourse, full or partial nudity, and close-ups of genitals, buttocks, or breasts.

“Sexual expression, whether visual or written, can be a legitimate form of artistic expression,” X’s policy said. “We believe in the autonomy of adults to engage with and create content that reflects their own beliefs, desires, and experiences, including those related to sexuality.”

Today, X Support promoted the update on X, confirming that “we have launched Adult Content and Violent Content policies to bring more clarity of our Rules and transparency into enforcement of these areas. These policies replace our former Sensitive Media and Violent Speech policies—but what we enforce against hasn’t changed.”

Seemingly also unchanged: none of this content can be monetized, as X’s ad policy says that “to ensure a positive user experience and a healthy conversation on the platform, X prohibits the promotion of adult sexual content globally.”

Under the policy, adult content is also prohibited from appearing in live videos, profile pictures, headers, list banners, or community cover photos.

X has been toying with the idea of fully embracing adult content and has even planned a feature for adult creators that could position X as an OnlyFans rival. That plan was delayed, Platformer reported in 2022, after red-teaming flagged a seemingly insurmountable obstacle to the launch: “Twitter cannot accurately detect child sexual exploitation and non-consensual nudity at scale.”

The new adult content policy still emphasizes that non-consensual adult content is prohibited, but it’s unclear whether the platform has gotten any better at distinguishing between consensually produced content and nonconsensual material. X did not immediately respond to Ars’ request for comment.

For adult content to be allowed on the platform, X now requires content warnings so that “users who do not wish to see it can avoid it” and “children below the age of 18 are not exposed to it.”

Users who plan to regularly post adult content can adjust their account’s media settings to place a label on all their images and videos. That results in a content warning for any visitor of that account’s profile, except for “people who have opted in to see possibly sensitive content,” who “will still see your account without the message.”

Users who only occasionally share adult content can choose to avoid the account label and instead edit an image or video to add a one-time label to any individual post, flagging just that post as sensitive.

Once a label is applied, any users under 18 will be blocked from viewing the post, X said.
