deepfake porn


X can’t stop spread of explicit, fake AI Taylor Swift images

Escalating the situation —

Will Swifties’ war on AI fakes spark a deepfake porn reckoning?


Explicit, fake AI-generated images sexualizing Taylor Swift began circulating online this week, quickly sparking mass outrage that may finally force a mainstream reckoning with harms caused by spreading non-consensual deepfake pornography.

A wide variety of deepfakes targeting Swift began spreading on X, the platform formerly known as Twitter, yesterday.

Ars found that some posts had been removed while others remained online as of this writing. One X post was viewed more than 45 million times over approximately 17 hours before it was removed, The Verge reported, and X seemingly fueled further spread by promoting these posts under the trending topic “Taylor Swift AI” in some regions.

The Verge noted that since these images started spreading, “a deluge of new graphic fakes have since appeared.” According to Fast Company, the harmful images were posted on X but soon spread to other platforms, including Reddit, Facebook, and Instagram. Some platforms, like X, ban sharing of harmful manipulated or AI-generated images but seem to struggle to detect banned content before it is widely viewed.
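Catching re-uploads before they spread is itself a hard technical problem. One widely used building block is perceptual hashing, which matches near-duplicate images even after resizing, recompression, or small crops. Below is a minimal sketch of that idea using the open-source Pillow and imagehash libraries; the file names and distance threshold are illustrative assumptions, not a description of X’s actual systems.

```python
# Minimal sketch: flag uploads that are near-duplicates of known banned
# images via perceptual hashing. File names and threshold are placeholders.
from PIL import Image
import imagehash

# Perceptual hashes of images already confirmed as banned content
banned_hashes = [
    imagehash.phash(Image.open(path))
    for path in ["banned_example_1.png", "banned_example_2.png"]
]

def is_likely_reupload(path, max_distance=8):
    """True if the image's pHash is within max_distance bits of a banned hash.

    pHash survives resizing, recompression, and minor edits, so
    near-duplicates match even when the file bytes differ entirely.
    """
    candidate = imagehash.phash(Image.open(path))
    # imagehash overloads subtraction to return the Hamming distance in bits
    return any(candidate - banned <= max_distance for banned in banned_hashes)

if is_likely_reupload("new_upload.jpg"):
    print("Hold for human review: near-duplicate of known banned image")
```

The catch, and one reason platforms still struggle, is that a freshly generated deepfake has no known hash to match against; hashing only helps once an image has already been identified and reported.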

Ars’ AI reporter Benj Edwards warned in 2022 that AI image-generation technology was advancing rapidly, making it possible to fine-tune an AI model on just a handful of a person’s photos and then use it to generate convincing fake images of that person in effectively unlimited quantities. That is seemingly what happened to Swift, and it’s currently unknown how many different non-consensual deepfakes have been generated or how widely those images have spread.

It’s also unknown what consequences have resulted from spreading the images. At least one verified X user had their account suspended after sharing fake images of Swift, The Verge reported, but Ars reviewed posts from Swift fans calling out users whose accounts remain active despite allegedly sharing the images. Fans have also been uploading countless favorite photos of Swift to bury the harmful images and keep them out of various X searches. They seem dedicated to reducing the spread however they can; some have even posted addresses in apparent attempts to dox an X user they allege was the initial source of the images.

Neither X nor Swift’s team has yet commented on the deepfakes, but it seems clear that solving the problem will require more than just requesting removals from social media platforms. The AI model trained on Swift’s images is probably still out there, likely procured through one of the known websites that specialize in making fine-tuned celebrity AI models. As long as the model exists, anyone with access could crank out as many new images as they wanted, making it hard even for someone with Swift’s resources to make the problem go away for good.

In that way, Swift’s predicament might raise awareness of why creating and sharing non-consensual deepfake pornography is harmful, perhaps moving the culture away from persistent notions that nobody is harmed by non-consensual AI-generated fakes.

Swift’s plight could also inspire regulators to act faster to combat non-consensual deepfake porn. Last year, she inspired a Senate hearing after a Live Nation scandal frustrated her fans, triggering lawmakers’ antitrust concerns about the leading ticket seller, The New York Times reported.

Some lawmakers are already working to combat deepfake porn. Congressman Joe Morelle (D-NY) re-introduced a proposed law criminalizing deepfake porn earlier this year, after teen boys at a New Jersey high school used AI image generators last October to create and share non-consensual fake nude images of female classmates. Under that proposed law, anyone sharing deepfake pornography without an individual’s consent risks fines and up to two years in prison; damages could climb as high as $150,000, with imprisonment for as long as 10 years, if sharing the images facilitates violence or impacts the proceedings of a government agency.

Elsewhere, the UK’s Online Safety Act requires platforms to keep illegal content, including deepfake pornography, off their services. Companies that fail to moderate such content risk fines of more than $20 million, or 10 percent of their global annual turnover, whichever amount is higher.
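The penalty formula itself is simple: the greater of the fixed floor or a tenth of turnover. A quick illustrative calculation in Python (dollar figures taken from the article; the statute itself specifies amounts in pounds):

```python
def max_osa_fine(global_turnover_usd: float) -> float:
    """Illustrative only: the greater of the fixed floor (the 'more than
    $20 million' figure above) or 10% of global annual turnover."""
    return max(20_000_000, 0.10 * global_turnover_usd)

# A platform with $5 billion in annual turnover faces up to $500 million
print(f"${max_osa_fine(5_000_000_000):,.0f}")  # -> $500,000,000
```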

The UK law, however, is controversial because it requires companies to scan private messages for illegal content. That makes it practically impossible for platforms to provide end-to-end encryption, which the American Civil Liberties Union has described as vital for user privacy and security.

As regulators tangle with legal questions and social media users with moral ones, some AI image generators have moved to limit their models’ ability to produce NSFW outputs. Stability AI, the company behind Stable Diffusion, did this partly by removing large quantities of sexualized images from its models’ training data. Others, like Microsoft’s Bing image creator, make it easy for users to report NSFW outputs.
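On the output side, the common pattern is a post-generation gate: run every generated image through a classifier and withhold anything it flags. As a concrete illustration, Stable Diffusion ships with such a safety checker; here is a minimal sketch using Hugging Face’s diffusers library, with the model ID and defaults assumed from releases current in early 2024.

```python
# Sketch of an output-side safety gate using Stable Diffusion's bundled
# safety checker via Hugging Face diffusers. Model ID is an assumption
# reflecting common usage in early 2024.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

def safe_generate(prompt: str):
    """Generate an image, but withhold it if the safety checker fires."""
    result = pipe(prompt)
    # With the default safety checker enabled, flagged outputs come back
    # blacked out and are marked True in nsfw_content_detected.
    if result.nsfw_content_detected and result.nsfw_content_detected[0]:
        return None  # suppress the image rather than returning it
    return result.images[0]

image = safe_generate("a watercolor landscape at dusk")
```

The weakness of this approach is that anyone running an open model locally can simply disable the checker, which is partly why training-data filtering and platform-side moderation matter too.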

But so far, keeping up with reports of deepfake porn seems to fall squarely on social media platforms’ shoulders. Swift’s battle this week shows how unprepared even the biggest platforms currently are to handle blitzes of harmful images seemingly uploaded faster than they can be removed.



Sharing deepfake porn could lead to lengthy prison time under proposed law

Fake nudes, real harms —

Teen “shouting for change” after fake nude images spread at NJ high school.


The US seems to be getting serious about criminalizing deepfake pornography after teen boys at a New Jersey high school used AI image generators to create and share non-consensual fake nude images of female classmates last October.

On Tuesday, Rep. Joseph Morelle (D-NY) announced that he has re-introduced the “Preventing Deepfakes of Intimate Images Act,” which seeks to “prohibit the non-consensual disclosure of digitally altered intimate images.” Under the proposed law, anyone sharing deepfake pornography without an individual’s consent risks damages that could go as high as $150,000 and imprisonment of up to 10 years if sharing the images facilitates violence or impacts the proceedings of a government agency.

The hope is that steep penalties will deter companies and individuals from allowing the disturbing images to spread. The bill creates a criminal offense for sharing deepfake pornography “with the intent to harass, annoy, threaten, alarm, or cause substantial harm to the finances or reputation of the depicted individual” or with “reckless disregard” or “actual knowledge” that the images will harm the individual depicted. It also provides a path for victims to sue offenders in civil court.

Rep. Tom Kean (R-NJ), who co-sponsored the bill, said that “proper guardrails and transparency are essential for fostering a sense of responsibility among AI companies and individuals using AI.”

“Try to imagine the horror of receiving intimate images looking exactly like you—or your daughter, or your wife, or your sister—and you can’t prove it’s not,” Morelle said. “Deepfake pornography is sexual exploitation, it’s abusive, and I’m astounded it is not already a federal crime.”

Joining Morelle in pushing to criminalize deepfake pornography were Dorota and Francesca Mani, who have spent the past two months meeting with lawmakers, The Wall Street Journal reported. The mother and daughter experienced the horror Morelle described firsthand when the New Jersey high school confirmed that 14-year-old Francesca was among the students targeted last year.

“What happened to me and my classmates was not cool, and there’s no way I’m just going to shrug and let it slide,” Francesca said. “I’m here, standing up and shouting for change, fighting for laws, so no one else has to feel as lost and powerless as I did on October 20th.”

Morelle’s office told Ars that “advocacy from partners like the Mani family” is “critical to bringing attention to this issue” and getting the proposed law “to the floor for a vote.”

Morelle introduced the law in December 2022, but it failed to pass that year or in 2023. He’s re-introducing the law in 2024 after seemingly gaining more support during a House Oversight subcommittee hearing on “Advances in Deepfake Technology” last November.

At that hearing, many lawmakers warned of the dangers of AI-generated deepfakes, citing a study from the Dutch AI company Sensity, which found that 96 percent of deepfakes online are deepfake porn—the majority of which targets women.

But lawmakers also made clear that it’s currently hard to detect AI-generated images and distinguish them from real images.

According to a hearing transcript posted by the nonprofit news organization Tech Policy Press, David Doermann—currently interim chair of the University at Buffalo’s computer science and engineering department and former program manager at the Defense Advanced Research Projects Agency (DARPA)—told lawmakers that DARPA was already working on advanced deepfake detection tools but still had more work to do.
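DARPA’s tools aren’t public, but published detection research often looks for statistical fingerprints that generative models leave behind, for example in an image’s frequency spectrum. The toy sketch below computes one such statistic; real detectors train classifiers over many features rather than thresholding a single number, and the band cutoff here is an arbitrary assumption.

```python
# Toy illustration of a frequency-domain detection feature. Pedagogical
# only: not DARPA's tooling, and the 0.75 band cutoff is arbitrary.
import numpy as np
from PIL import Image

def high_freq_energy_ratio(path: str) -> float:
    """Fraction of spectral energy in the outermost frequency band."""
    gray = np.asarray(Image.open(path).convert("L"), dtype=np.float64)
    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(gray))) ** 2
    h, w = spectrum.shape
    cy, cx = h // 2, w // 2
    yy, xx = np.ogrid[:h, :w]
    radius = np.hypot(yy - cy, xx - cx)
    outer = spectrum[radius > 0.75 * min(cy, cx)].sum()
    return float(outer / spectrum.sum())

# A real system would feed features like this into a trained classifier
# instead of comparing one number against a hand-picked threshold.
print(high_freq_energy_ratio("suspect_image.png"))
```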

To support laws like Morelle’s, lawmakers have called for more funding for DARPA and the National Science Foundation to aid in ongoing efforts to create effective detection tools. At the same time, President Joe Biden—through a sweeping AI executive order—has pushed for solutions like watermarking deepfakes. Biden’s executive order also instructed the Department of Commerce to establish “standards and best practices for detecting AI-generated content and authenticating official content.”
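Watermarking here means embedding an imperceptible, machine-readable signal in an image at generation time so it can later be identified as synthetic. The toy sketch below hides a payload in pixels’ least-significant bits purely to show the principle; production schemes, such as the DCT-domain watermark bundled with Stable Diffusion, are engineered to survive compression and editing, which this version would not.

```python
# Toy watermark: hide a byte payload in pixel least-significant bits.
# Demonstrates the principle only; it would not survive JPEG compression.
import numpy as np
from PIL import Image

MARK = b"AI-GEN"  # payload identifying the image as machine-generated

def embed(path_in: str, path_out: str, payload: bytes = MARK) -> None:
    pixels = np.asarray(Image.open(path_in).convert("RGB")).copy()
    bits = np.unpackbits(np.frombuffer(payload, dtype=np.uint8))
    flat = pixels.reshape(-1)
    flat[: bits.size] = (flat[: bits.size] & 0xFE) | bits  # overwrite LSBs
    Image.fromarray(flat.reshape(pixels.shape)).save(path_out, "PNG")

def extract(path: str, n_bytes: int = len(MARK)) -> bytes:
    flat = np.asarray(Image.open(path).convert("RGB")).reshape(-1)
    return np.packbits(flat[: n_bytes * 8] & 1).tobytes()

embed("generated.png", "marked.png")
assert extract("marked.png") == MARK  # PNG is lossless, so the bits survive
```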

Morelle is working to push his law through in 2024, warning that deepfake pornography is already affecting a “generation of young women like Francesca,” who are “ready to stand up against systemic oppression and stand in their power.”

Until the federal government figures out how to best prevent the sharing of AI-generated deepfakes, Francesca and her mom plan to keep pushing for change.

“Our voices are our secret weapon, and our words are like power-ups in Fortnite,” Francesca said. “My mom and I are advocating to create a world where being safe isn’t just a hope; it’s a reality for everyone.”



Report: Deepfake porn consistently found atop Google, Bing search results

Shocking results —

Google vows to create more safeguards to protect victims of deepfake porn.


Popular search engines like Google and Bing are making it easy to surface nonconsensual deepfake pornography by placing it at the top of search results, NBC News reported Thursday.

These controversial deepfakes superimpose faces of real women, often celebrities, onto the bodies of adult entertainers to make them appear to be engaging in real sex. Thanks in part to advances in generative AI, there is now a burgeoning black market for deepfake porn that could be discovered through a Google search, NBC News previously reported.

NBC News uncovered the problem by turning off safe search and then combining the names of 36 female celebrities with obvious search terms like “deepfakes,” “deepfake porn,” and “fake nudes.” Bing surfaced links to deepfake videos in top results 35 times, while Google did so 34 times. Bing also surfaced “fake nude photos of former teen Disney Channel female actors” using images in which the actors appear to be underage.
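Methodologically, NBC’s test amounts to a simple search audit: pair each name with each term and check whether known deepfake sites appear in the top results. A sketch of that shape follows; search_top_results and the domain list are hypothetical stand-ins, since querying real engines at scale requires an official API and a terms-of-service review.

```python
# Sketch of a search audit like NBC News'. `search_top_results` and the
# domain list are hypothetical stand-ins, not a real engine integration.
from itertools import product

NAMES = ["celebrity_1", "celebrity_2"]  # NBC used 36 names (not published)
TERMS = ["deepfakes", "deepfake porn", "fake nudes"]
DEEPFAKE_DOMAINS = {"example-deepfake-site.test"}  # placeholder blocklist

def search_top_results(query: str, n: int = 10) -> list[str]:
    """Hypothetical: return the domains of the top-n results for a query."""
    return []

hits = 0
for name, term in product(NAMES, TERMS):
    domains = search_top_results(f"{name} {term}")
    if any(domain in DEEPFAKE_DOMAINS for domain in domains):
        hits += 1
print(f"{hits} of {len(NAMES) * len(TERMS)} queries surfaced deepfake sites")
```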

A Google spokesperson told NBC that the tech giant understands “how distressing this content can be for people affected by it” and is “actively working to bring more protections to Search.”

According to Google’s spokesperson, this controversial content sometimes appears because “Google indexes content that exists on the web,” just “like any search engine.” But while queries that explicitly include terms like “deepfake” may consistently surface such results, Google “actively” designs “ranking systems to avoid shocking people with unexpected harmful or explicit content that they aren’t looking for,” the spokesperson said.
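What the spokesperson is describing is, in essence, query-conditional demotion: explicit results get pushed down unless the query itself unambiguously asks for explicit content. A toy model of that safeguard, with scores and term lists invented for illustration, also shows why deepfake-specific queries still surface the material:

```python
# Toy model of query-conditional demotion. Scores, weights, and the term
# list are invented for illustration; real ranking systems are learned.
EXPLICIT_QUERY_TERMS = {"deepfake", "nsfw", "nude"}

def rerank(query: str, results: list[dict]) -> list[dict]:
    query_is_explicit = any(t in query.lower() for t in EXPLICIT_QUERY_TERMS)

    def score(r: dict) -> float:
        # Penalize explicit pages only when the query didn't ask for them
        penalty = 0.0 if query_is_explicit else 0.9 * r["explicit_score"]
        return r["relevance"] - penalty

    return sorted(results, key=score, reverse=True)

results = [
    {"url": "a.test", "relevance": 0.9, "explicit_score": 0.95},
    {"url": "b.test", "relevance": 0.7, "explicit_score": 0.0},
]
print([r["url"] for r in rerank("taylor swift photos", results)])
# -> ['b.test', 'a.test']: explicit page demoted for a benign query
print([r["url"] for r in rerank("taylor swift deepfake", results)])
# -> ['a.test', 'b.test']: an explicit query re-surfaces it
```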

Currently, the only way to remove nonconsensual deepfake porn from Google search results is for the victim to submit a form personally or through an “authorized representative.” That form requires victims to show three things: that they’re “identifiably depicted” in the deepfake; that the “imagery in question is fake and falsely depicts” them as “nude or in a sexually explicit situation”; and that the imagery was distributed without their consent.

While this gives victims some recourse for removing content, experts are concerned that search engines need to do more to effectively reduce the prevalence of deepfake pornography available online, which is currently rising at a rapid rate.

This emerging issue increasingly affects average people and even children, not just celebrities. Last June, child safety experts discovered thousands of realistic but fake AI child sex images being traded online, around the same time that the FBI warned that the use of AI-generated deepfakes in sextortion schemes was increasing.

And nonconsensual deepfake porn isn’t just being traded in black markets online. In November, New Jersey police launched a probe after high school teens used AI image generators to create and share fake nude photos of female classmates.

With tech companies seemingly slow to stop the rise in deepfakes, some states have passed laws criminalizing deepfake porn distribution. Last July, Virginia amended its existing law criminalizing revenge porn to cover any “falsely created videographic or still image.” In October, New York passed a law specifically banning deepfake porn, imposing a $1,000 fine and up to a year of jail time on violators. Members of Congress have also introduced legislation that would create criminal penalties for spreading deepfake porn.

Although Google told NBC News that its search features “don’t allow manipulated media or sexually explicit content,” the outlet’s investigation seemingly found otherwise. NBC News also noted that Google’s Play app store hosts an app that was previously marketed for creating deepfake porn, despite a policy prohibiting “apps determined to promote or perpetuate demonstrably misleading or deceptive imagery, videos and/or text.” This suggests that Google’s enforcement against deceptive imagery may be inconsistent.

Google told Ars that it will soon be strengthening its policies against apps featuring AI-generated restricted content in the Play Store. A generative AI policy taking effect on January 31 will require all apps to comply with developer policies that ban AI-generated restricted content, including deceptive content and content that facilitates the exploitation or abuse of children.

Experts told NBC News that “Google’s lack of proactive patrolling for abuse has made it and other search engines useful platforms for people looking to engage in deepfake harassment campaigns.”

Google is currently “in the process of building more expansive safeguards, with a particular focus on removing the need for known victims to request content removals one by one,” Google’s spokesperson told NBC News.

Microsoft’s spokesperson told Ars that Microsoft updated its process for reporting concerns with Bing searches to include non-consensual intimate imagery (NCII) used in “deepfakes” last August because it had become a “significant concern.” Like Google, Microsoft allows victims to report NCII deepfakes by submitting a web form to request removal from search results, understanding that any sharing of NCII is “a gross violation of personal privacy and dignity with devastating effects for victims.”

In the past, Microsoft President Brad Smith has said that, among all the dangers AI poses, deepfakes worry him most, though he has seemed more concerned about deepfakes fueling “foreign cyber influence operations” than about deepfake porn.

This story was updated on January 11 to include information on Google’s AI-generated content policy and on January 12 to include information from Microsoft.
