

Meta pirated and seeded porn for years to train AI, lawsuit says

Evidence may prove Meta seeded more content

Seeking evidence to back its own copyright infringement claims, Strike 3 Holdings searched “its archive of recorded infringement captured by its VXN Scan and Cross Reference tools” and found 47 “IP addresses identified as owned by Facebook infringing its copyright protected Works.”

The data allegedly demonstrates a “continued unauthorized distribution” over “several years.” And Meta allegedly did not stop its seeding after Strike 3 Holdings confronted the tech giant with this evidence—despite the IP data supposedly being verified through an industry-leading provider called MaxMind.

Strike 3 Holdings shared a screenshot of MaxMind’s findings. Credit: via Strike 3 Holdings’ complaint

Meta also attempted to “conceal its BitTorrent activities” through “six Virtual Private Clouds” that formed a “stealth network” of “hidden IP addresses,” the lawsuit alleged, seemingly implicating a “major third-party data center provider” as a partner in Meta’s piracy.

An analysis of these IP addresses allegedly found “data patterns that matched infringement patterns seen on Meta’s corporate IP Addresses” and included “evidence of other activity on the BitTorrent network including ebooks, movies, television shows, music, and software.” The seemingly non-human patterns documented on both sets of IP addresses suggest the data was for AI training and not for personal use, Strike 3 Holdings alleged.

Perhaps most shockingly, considering that a Meta employee joked “torrenting from a corporate laptop doesn’t feel right,” Strike 3 Holdings further alleged that it found “at least one residential IP address of a Meta employee” infringing its copyrighted works. That suggests Meta may have directed an employee to torrent pirated data outside the office to obscure the data trail.

The adult site operator did not identify the employee or the major data center discussed in its complaint, noting in a subsequent filing that it recognized the risks to Meta’s business and its employees’ privacy of sharing sensitive information.

In total, the company alleged that evidence shows “well over 100,000 unauthorized distribution transactions” linked to Meta’s corporate IPs. Strike 3 Holdings is hoping the evidence will lead a jury to find Meta liable for direct copyright infringement—or, if jurors conclude that Meta distanced itself by using the third-party data center or an employee’s home IP address, liable for secondary and vicarious copyright infringement.

“Meta has the right and ability to supervise and/or control its own corporate IP addresses, as well as the IP addresses hosted in off-infra data centers, and the acts of its employees and agents infringing Plaintiffs’ Works through their residential IPs by using Meta’s AI script to obtain content through BitTorrent,” the complaint said.



Trump’s hasty Take It Down Act has “gaping flaws” that threaten encryption

Legal challenges will likely immediately follow law’s passage, experts said.

The Take It Down Act—which requires platforms to remove both real and artificial intelligence-generated non-consensual intimate imagery (NCII) within 48 hours of victims’ reports—is widely expected to pass a vote in the House of Representatives tonight.

After that, it goes to Donald Trump’s desk, where the president has confirmed that he will promptly sign it into law, joining first lady Melania Trump in strongly campaigning for its swift passage. Victims-turned-advocates, many of them children, similarly pushed lawmakers to take urgent action to protect a growing number of victims from the increasing risks of being repeatedly targeted in fake sexualized images or revenge porn that experts say can quickly spread widely online.

Digital privacy experts tried to raise some concerns, warning that the law seemed overly broad and could trigger widespread censorship online. Given such a short window to comply, platforms will likely remove some content that may not be NCII, the Electronic Frontier Foundation (EFF) warned. And even more troublingly, the law does not explicitly exempt encrypted messages, which could potentially encourage platforms to one day break encryption due to the liability threat. Also, it seemed likely that the removal process could be abused by people who hope platforms will automatically remove any reported content, especially after Trump admitted that he would use the law to censor his enemies.

None of that feedback mattered, the EFF’s assistant director of federal affairs, Maddie Daly, told Ars. Lawmakers accepted no amendments in their rush to get the bill to Trump’s desk. There was “immense pressure,” Daly said, “to quickly pass this bill without full consideration.” Because of the rush, Daly suggested that the Take It Down Act still has “gaping flaws.”

While the tech law is expected to achieve the rare feat of getting through Congress at what experts told Ars was a record pace, both supporters and critics also expect that the law will just as promptly be challenged in courts.

Supporters have suggested that any litigation exposing flaws could result in amendments. They’re bracing for that backlash even as they prepare for tonight’s expected win, hoping that the law can survive any subsequent legal attacks mostly intact.

Experts disagree on encryption threats

In a press conference hosted by the nonprofit Americans for Responsible Innovation, Slade Bond—who serves as chair of public policy for the law firm Cuneo Gilbert & LaDuca, LLP—advocated for the law’s passage, warning, “we should not let caution be the enemy of progress.”

Bond joined other supporters in suggesting that apparent threats to encryption or online speech are “far-fetched.”

On his side was Encode’s vice president of public policy, Adam Billen, who pushed back on the claim that companies might break encryption due to the law’s vague text.

Billen predicted that “most encrypted content”—supposedly including private or direct messages—wouldn’t be threatened with takedowns, arguing that the law explicitly covers content that is published (and, importantly, not just distributed) on services that provide a “forum for specifically user generated content.”

“In our mind, encryption simply just is not a question under this bill, and we have explicitly opposed other legislation that would explicitly break encryption,” Billen said.

That may be one way of reading the law, but Daly told Ars that the EFF’s lawyers had a different take.

“We just don’t agree with that reading,” she said. “As drafted, what will likely pass the floor tonight is absolutely a threat to encryption. There are exemptions for email services, but direct messages, cloud storage, these are not exempted.”

Instead, she suggested that lawmakers jammed the law through without weighing amendments that might have explicitly shielded encryption or prevented politicized censorship.

At the supporters’ press conference, Columbia Law School professor Tim Wu suggested that, for lawmakers facing a public vote, opposing the bill became “totally untenable” because “there’s such obvious harm” and “such a visceral problem with fake porn, particularly of minors.”

Supporter calls privacy fears “hypothetical”

Stefan Turkheimer, vice president of public policy for the anti-sexual abuse organization RAINN, agreed with Wu that the growing problem required immediate regulatory action. While various reports over the past year have indicated that the amount of AI-generated NCII is rising, Turkheimer suggested that existing statistics severely undercount the problem and are already outdated, noting that RAINN’s hotline reports of this kind of abuse are “doubling” monthly.

Coming up for a final vote amid an uptick in abuse reports, the Take It Down Act seeks to address harms that most people find “patently offensive,” Turkheimer said, suggesting it was the kind of bill that “can only get killed in the dark.”

However, Turkheimer was the only supporter at the press conference who indicated that texting may be part of the problem that the law could potentially address, perhaps justifying critics’ concerns. He thinks deterring victims’ harm is more important than weighing critics’ fears of censorship or other privacy risks.

“This is a real harm that a lot of people are experiencing, that every single time that they get a text message or they go on the Internet, they may see themselves in a non-consensual image,” Turkheimer said. “That is the real problem, and we’re balancing” that against “sort of a hypothetical problem on the other end, which is that some people’s speech might be affected.”

Remedying text-based abuse could become a privacy problem, an EFF blog suggested, since communications providers “may be served with notices they simply cannot comply with, given the fact that these providers cannot view the contents of messages on their platforms. Platforms may respond by abandoning encryption entirely in order to be able to monitor content—turning private conversations into surveilled spaces.”

That’s why Daly told Ars that the EFF “is very concerned about the effects of Take It Down,” viewing it as a “massive privacy violation.”

“Congress should protect victims of NCII, but we don’t think that Take It Down is the way to do this or that it will actually protect victims,” Daly said.

Further, the potential for politicians to weaponize the takedown system to censor criticism should not be ignored, the EFF warned in another blog. “There are no penalties whatsoever to dissuade a requester from simply insisting that content is NCII,” the blog noted, urging Congress to instead “focus on enforcing and improving the many existing civil and criminal laws that address NCII, rather than opting for a broad takedown regime.”

“Non-consensual intimate imagery is a serious problem that deserves serious consideration, not a hastily drafted, overbroad bill that sweeps in legal, protected speech,” the EFF said.

That call largely fell on deaf ears. Once the law passes, the EFF will continue recommending encrypted services as a reliable means to protect user privacy, Daly said, but remains concerned about the unintended consequences of the law’s vague encryption language.

Although Bond said that precedent is on supporters’ side—arguing “the Supreme Court has been abundantly clear for decades that the First Amendment is not a shield for the type of content that the Take It Down Act is designed to address,” like sharing child sexual abuse materials or engaging in sextortion—Daly said that the EFF remains optimistic that courts will intervene to prevent critics’ worst fears.

“We expect to see challenges to this,” Daly said. “I don’t think this will pass muster.”


Ashley is a senior policy reporter for Ars Technica, dedicated to tracking social impacts of emerging policies and new technologies. She is a Chicago-based journalist with 20 years of experience.



4chan daily challenge sparked deluge of explicit AI Taylor Swift images


4chan users who have made a game out of exploiting popular AI image generators appear to be at least partly responsible for the flood of fake images sexualizing Taylor Swift that went viral last month.

Graphika researchers—who study how communities are manipulated online—traced the fake Swift images to a 4chan message board that’s “increasingly” dedicated to posting “offensive” AI-generated content, The New York Times reported. Fans of the message board take part in daily challenges, Graphika reported, sharing tips to bypass AI image generator filters and showing no signs of stopping their game any time soon.

“Some 4chan users expressed a stated goal of trying to defeat mainstream AI image generators’ safeguards rather than creating realistic sexual content with alternative open-source image generators,” Graphika reported. “They also shared multiple behavioral techniques to create image prompts, attempt to avoid bans, and successfully create sexually explicit celebrity images.”

Ars reviewed a thread flagged by Graphika where users were specifically challenged to use Microsoft tools like Bing Image Creator and Microsoft Designer, as well as OpenAI’s DALL-E.

“Good luck,” the original poster wrote, while encouraging other users to “be creative.”

OpenAI has denied that any of the Swift images were created using DALL-E, while Microsoft has continued to claim that it’s investigating whether any of its AI tools were used.

Cristina López G., a senior analyst at Graphika, noted that Swift is not the only celebrity targeted in the 4chan thread.

“While viral pornographic pictures of Taylor Swift have brought mainstream attention to the issue of AI-generated non-consensual intimate images, she is far from the only victim,” López G. said. “In the 4chan community where these images originated, she isn’t even the most frequently targeted public figure. This shows that anyone can be targeted in this way, from global celebrities to school children.”

Originally, 404 Media reported that the harmful Swift images appeared to originate from 4chan and Telegram channels before spreading on X (formerly Twitter) and other social media. Attempting to stop the spread, X took the drastic step of blocking all searches for “Taylor Swift” for two days.

But López G. said that Graphika’s findings suggest that platforms will continue to risk being inundated with offensive content so long as 4chan users keep challenging each other to subvert image generator filters. Rather than expecting platforms to chase down the harmful content, López G. recommended that AI companies get ahead of the problem by taking responsibility for outputs and tracking the evolving tactics of toxic online communities, which openly report precisely how they’re getting around safeguards.

“These images originated from a community of people motivated by the ‘challenge’ of circumventing the safeguards of generative AI products, and new restrictions are seen as just another obstacle to ‘defeat,’” López G. said. “It’s important to understand the gamified nature of this malicious activity in order to prevent further abuse at the source.”

Experts told The Times that 4chan users were likely motivated to participate in these challenges for bragging rights and to “feel connected to a wider community.”
