
“CSAM generated by AI is still CSAM,” DOJ says after rare arrest

The US Department of Justice has started cracking down on the use of AI image generators to produce child sexual abuse materials (CSAM).

On Monday, the DOJ arrested Steven Anderegg, a 42-year-old “extremely technologically savvy” Wisconsin man who allegedly used Stable Diffusion to create “thousands of realistic images of prepubescent minors,” which were then distributed on Instagram and Telegram.

Police were tipped off to Anderegg’s alleged activities after Instagram flagged direct messages sent from his account to a 15-year-old boy. Instagram reported the messages to the National Center for Missing and Exploited Children (NCMEC), which alerted law enforcement.

During the Instagram exchange, the DOJ said, Anderegg sent sexually explicit AI images of minors soon after the teen made his age known; prosecutors alleged that “the only reasonable explanation for sending these images was to sexually entice the child.”

According to the DOJ’s indictment, Anderegg is a software engineer with “professional experience working with AI.” Because of his “special skill” in generative AI (GenAI), he was allegedly able to generate the CSAM using a version of Stable Diffusion, “along with a graphical user interface and special add-ons created by other Stable Diffusion users that specialized in producing genitalia.”

After Instagram reported Anderegg’s messages to the minor, cops seized Anderegg’s laptop and found “over 13,000 GenAI images, with hundreds—if not thousands—of these images depicting nude or semi-clothed prepubescent minors lasciviously displaying or touching their genitals” or “engaging in sexual intercourse with men.”

In his messages to the teen, Anderegg seemingly “boasted” about his skill in generating CSAM, the indictment said. The DOJ alleged that evidence from his laptop showed that Anderegg “used extremely specific and explicit prompts to create these images,” including “specific ‘negative’ prompts—that is, prompts that direct the GenAI model on what not to include in generated content—to avoid creating images that depict adults.” These go-to prompts were stored on his computer, the DOJ alleged.

Anderegg is currently in federal custody and has been charged with production, distribution, and possession of AI-generated CSAM, as well as “transferring obscene material to a minor under the age of 16,” the indictment said.

Because the DOJ suspected that Anderegg intended to use the AI-generated CSAM to groom a minor, the DOJ is arguing that there are “no conditions of release” that could prevent him from posing a “significant danger” to his community while the court mulls his case. The DOJ warned the court that it’s highly likely that any future contact with minors could go unnoticed, as Anderegg is seemingly tech-savvy enough to hide any future attempts to send minors AI-generated CSAM.

“He studied computer science and has decades of experience in software engineering,” the indictment said. “While computer monitoring may address the danger posed by less sophisticated offenders, the defendant’s background provides ample reason to conclude that he could sidestep such restrictions if he decided to. And if he did, any reoffending conduct would likely go undetected.”

If convicted of all four counts, he could face “a total statutory maximum penalty of 70 years in prison and a mandatory minimum of five years in prison,” the DOJ said. Partly because of his “special skill in GenAI,” the DOJ, which described its evidence against Anderegg as “strong,” suggested that it may recommend a sentencing range “as high as life imprisonment.”

Announcing Anderegg’s arrest, Deputy Attorney General Lisa Monaco made it clear that creating AI-generated CSAM is illegal in the US.

“Technology may change, but our commitment to protecting children will not,” Monaco said. “The Justice Department will aggressively pursue those who produce and distribute child sexual abuse material—or CSAM—no matter how that material was created. Put simply, CSAM generated by AI is still CSAM, and we will hold accountable those who exploit AI to create obscene, abusive, and increasingly photorealistic images of children.”

Snapchat isn’t liable for connecting 12-year-old to convicted sex offenders

A judge has dismissed a complaint from a parent and guardian of a girl, now 15, who was sexually assaulted when she was 12 years old after Snapchat recommended that she connect with convicted sex offenders.

According to the court filing, the abuse that the girl, C.O., experienced on Snapchat happened soon after she signed up for the app in 2019. Through its “Quick Add” feature, Snapchat “directed her” to connect with “a registered sex offender using the profile name JASONMORGAN5660.” After a little more than a week on the app, C.O. was bombarded with inappropriate images and subjected to sextortion and threats before the adult user pressured her to meet up, then raped her. Cops arrested the adult user the next day, resulting in his incarceration, but his Snapchat account remained active for three years despite reports of harassment, the complaint alleged.

Two years later, at 14, C.O. connected with another convicted sex offender on Snapchat, a former police officer who offered to give C.O. a ride to school and then sexually assaulted her. The second offender is also currently incarcerated, the judge’s opinion noted.

The lawsuit painted a picture of Snapchat’s ongoing neglect of minors it knows are being targeted by sexual predators. Before the attacks on C.O., both adult users had sent and requested sexually explicit photos, seemingly without the app detecting any child sexual abuse materials exchanged on the platform. C.O. had previously reported other adult accounts sending her photos of male genitals, but Snapchat allegedly “did nothing to block these individuals from sending her inappropriate photographs.”

Among other complaints, C.O.’s lawsuit alleged that the algorithm behind Snapchat’s “Quick Add” feature was the problem. The algorithm allegedly detects when adult accounts are seeking to connect with young girls and, by design, sends more young girls their way, continually directing sexual predators toward vulnerable targets. Snapchat is allegedly aware of these abuses and should therefore be held liable for the harm caused to C.O., the lawsuit argued.

Although C.O.’s case raised difficult questions, Judge Barbara Bellis ultimately agreed with Snapchat that Section 230 of the Communications Decency Act barred all claims and shielded Snap because “the allegations of this case fall squarely within the ambit of the immunity afforded to” platforms publishing third-party content.

According to Bellis, C.O.’s family had “clearly alleged” that Snap failed to design its recommendation systems to block young girls from receiving messages from sexual predators. But Section 230 shields Snap from liability in this case, Bellis found, because the messages exchanged were third-party content, and designing a recommendation system to deliver such content is a protected publishing activity.

Internet law professor Eric Goldman wrote in his blog that Bellis’ “well-drafted and no-nonsense opinion” is “grounded” in precedent. Pointing to an “extremely similar” 2008 case against MySpace—”which reached the same outcome that Section 230 applies to offline sexual abuse following online messaging”—Goldman suggested that “the law has been quite consistent for a long time.”

However, as this case was being decided, a seemingly conflicting ruling in a Los Angeles court found that “Section 230 didn’t protect Snapchat from liability for allegedly connecting teens with drug dealers,” MediaPost noted. Bellis acknowledged this outlier opinion but did not appear to consider it persuasive.

Yet, at the end of her opinion, Bellis seemed to take aim at Section 230 as perhaps being too broad.

She quoted a ruling from the First Circuit Court of Appeals, which noted that some Section 230 cases, presumably like C.O.’s, are “hard” for courts not because “the legal issues defy resolution,” but because Section 230 requires that the court “deny relief to plaintiffs whose circumstances evoke outrage.” She then went on to quote an appellate court ruling on a similarly “difficult” Section 230 case that warned “without further legislative action,” there is “little” that courts can do “but join with other courts and commentators in expressing concern” with Section 230’s “broad scope.”

Ars could not immediately reach Snapchat or lawyers representing C.O.’s family for comment.

Backdoors that let cops decrypt messages violate human rights, EU court says

Building of the European Court of Human Rights in Strasbourg, France.

The European Court of Human Rights (ECHR) has ruled that weakening end-to-end encryption disproportionately risks undermining human rights. The international court’s decision could potentially disrupt the European Commission’s proposed plans to require email and messaging service providers to create backdoors that would allow law enforcement to easily decrypt users’ messages.

This ruling came after Russia’s intelligence agency, the Federal Security Service (FSB), began requiring Telegram to share users’ encrypted messages to deter “terrorism-related activities” in 2017, the ECHR’s ruling said. A Russian Telegram user alleged that the FSB’s requirement violated his rights to a private life and private communications, as well as the rights of all Telegram users.

The Telegram user, apparently alarmed, moved to block the required disclosures after Telegram refused to comply with an FSB order to decrypt the messages of six users suspected of terrorism. According to Telegram, “it was technically impossible to provide the authorities with encryption keys associated with specific users,” and therefore “any disclosure of encryption keys” would affect the “privacy of the correspondence of all Telegram users,” the ECHR’s ruling said.

For refusing to comply, Telegram was fined, and one court even ordered the app to be blocked in Russia, while dozens of Telegram users rallied to challenge the order and keep Telegram services available in Russia. Ultimately, the users’ multiple court challenges failed, sending the case before the ECHR, while Telegram tenuously remained available in Russia.

The Russian government told the ECHR that “allegations that the security services had access to the communications of all users” were “unsubstantiated” because their request only concerned six Telegram users.

They further argued that Telegram providing encryption keys to FSB “did not mean that the information necessary to decrypt encrypted electronic communications would become available to its entire staff.” Essentially, the government believed that FSB staff’s “duty of discretion” would prevent any intrusion on private life for Telegram users as described in the ECHR complaint.

Seemingly most critically, the government told the ECHR that any intrusion on private lives resulting from decrypting messages was “necessary” to combat terrorism in a democratic society. To back up this claim, the government pointed to a 2017 terrorist attack that was “coordinated from abroad through secret chats via Telegram.” The government claimed that a second terrorist attack that year was prevented after the government discovered it was being coordinated through Telegram chats.

However, privacy advocates backed up Telegram’s claims that the messaging services couldn’t technically build a backdoor for governments without impacting all its users. They also argued that the threat of mass surveillance could be enough to infringe on human rights. The European Information Society Institute (EISI) and Privacy International told the ECHR that even if governments never used required disclosures to mass surveil citizens, it could have a chilling effect on users’ speech or prompt service providers to issue radical software updates weakening encryption for all users.

In the end, the ECHR concluded that the Telegram user’s rights had been violated, partly due to privacy advocates and international reports that corroborated Telegram’s position that complying with the FSB’s disclosure order would force changes impacting all its users.

The “confidentiality of communications is an essential element of the right to respect for private life and correspondence,” the ECHR’s ruling said. Thus, requiring messages to be decrypted by law enforcement “cannot be regarded as necessary in a democratic society.”

Martin Husovec, a law professor who helped to draft EISI’s testimony, told Ars that EISI is “obviously pleased that the Court has recognized the value of encryption and agreed with us that state-imposed weakening of encryption is a form of indiscriminate surveillance because it affects everyone’s privacy.”

Cops bogged down by flood of fake AI child sex images, report says

“Particularly heinous” —

Investigations tied to harmful AI sex images will grow “exponentially,” experts say.

Law enforcement is continuing to warn that a “flood” of AI-generated fake child sex images is making it harder to investigate real crimes against abused children, The New York Times reported.

Last year, after researchers uncovered thousands of realistic but fake AI child sex images online, attorneys general across the US quickly called on Congress to set up a committee to squash the problem. But so far, Congress has moved slowly, and only a few states have specifically banned AI-generated non-consensual intimate imagery. Meanwhile, law enforcement continues to struggle to figure out how to confront bad actors found to be creating and sharing images that, for now, largely exist in a legal gray zone.

“Creating sexually explicit images of children through the use of artificial intelligence is a particularly heinous form of online exploitation,” Steve Grocki, the chief of the Justice Department’s child exploitation and obscenity section, told The Times. Experts told The Washington Post in 2023 that risks of realistic but fake images spreading included normalizing child sexual exploitation, luring more children into harm’s way, and making it harder for law enforcement to find actual children being harmed.

In one example, the FBI announced earlier this year that an American Airlines flight attendant, Estes Carter Thompson III, was arrested “for allegedly surreptitiously recording or attempting to record a minor female passenger using a lavatory aboard an aircraft.” A search of Thompson’s iCloud revealed “four additional instances” where Thompson allegedly recorded other minors in the lavatory, as well as “over 50 images of a 9-year-old unaccompanied minor” sleeping in her seat. While police attempted to identify these victims, they further alleged that “hundreds of images of AI-generated child pornography” were found on Thompson’s phone.

The troubling case seems to illustrate how AI-generated child sex images can be linked to real criminal activity while also showing how police investigations could be bogged down by attempts to distinguish photos of real victims from AI images that could depict real or fake children.

Robin Richards, the commander of the Los Angeles Police Department’s Internet Crimes Against Children task force, confirmed to the NYT that due to AI, “investigations are way more challenging.”

And because image generators and AI models that can be trained on photos of children are widely available, “using AI to alter photos” of children online “is becoming more common,” Michael Bourke—a former chief psychologist for the US Marshals Service who spent decades supporting investigations into sex offenses involving children—told the NYT. Richards said that cops don’t know what to do when they find these AI-generated materials.

Currently, there aren’t many cases involving AI-generated child sexual abuse materials (CSAM), the NYT reported, but experts expect that number to “grow exponentially,” raising “novel and complex questions of whether existing federal and state laws are adequate to prosecute these crimes.”

Platforms struggle to monitor harmful AI images

At a Senate Judiciary Committee hearing today grilling Big Tech CEOs over child sexual exploitation (CSE) on their platforms, Linda Yaccarino—CEO of X (formerly Twitter)—warned in her opening statement that artificial intelligence is also making it harder for platforms to monitor CSE. Yaccarino suggested that industry collaboration is imperative to get ahead of the growing problem, as is providing more resources to law enforcement.

However, US law enforcement officials have indicated that platforms are also making it harder to police CSAM and CSE online. Platforms relying on AI to detect CSAM are generating “unviable reports” gumming up investigations managed by already underfunded law enforcement teams, The Guardian reported. And the NYT reported that other investigations are being thwarted by platforms adding end-to-end encryption options to messaging services, which “drastically limit the number of crimes the authorities are able to track.”

The NYT report noted that in 2002, the Supreme Court struck down a 1996 law banning “virtual” or “computer-generated child pornography.” South Carolina’s attorney general, Alan Wilson, has said that the AI technology available today may test that ruling, especially if minors continue to be harmed by fake AI child sex images spreading online. In the meantime, federal laws such as obscenity statutes may be used to prosecute cases, the NYT reported.

Congress has recently re-introduced some legislation to directly address AI-generated non-consensual intimate images after a wide range of images depicting fake AI porn of pop star Taylor Swift went viral this month. That includes the Disrupt Explicit Forged Images and Non-Consensual Edits Act, which would create a federal civil remedy for victims of any age who are identifiable in AI images depicting them as nude or engaged in sexually explicit conduct or sexual scenarios.

There’s also the “Preventing Deepfakes of Intimate Images Act,” which seeks to “prohibit the non-consensual disclosure of digitally altered intimate images.” That was re-introduced this year after teen boys generated AI fake nude images of female classmates and spread them around a New Jersey high school last fall. Francesca Mani, one of the teen victims in New Jersey, was there to help announce the proposed law, which includes penalties of up to two years imprisonment for sharing harmful images.

“What happened to me and my classmates was not cool, and there’s no way I’m just going to shrug and let it slide,” Mani said. “I’m here, standing up and shouting for change, fighting for laws, so no one else has to feel as lost and powerless as I did on October 20th.”

Child abusers are covering their tracks with better use of crypto

For those who trade in child sexual exploitation images and videos in the darkest recesses of the Internet, cryptocurrency has been both a powerful tool and a treacherous one. Bitcoin, for instance, has allowed denizens of that criminal underground to buy and sell their wares with no involvement from a bank or payment processor that might reveal their activities to law enforcement. But the public and surprisingly traceable transactions recorded in Bitcoin’s blockchain have sometimes led financial investigators directly to pedophiles’ doorsteps.

Now, after years of evolution in that grim cat-and-mouse game, new evidence suggests that online vendors of what was once commonly called “child porn” are learning to use cryptocurrency with significantly more skill and stealth—and that it’s helping them survive longer in the Internet’s most abusive industry.

Today, as part of an annual crime report, cryptocurrency tracing firm Chainalysis revealed new research that analyzed blockchains to measure the changing scale and sophistication of the cryptocurrency-based sale of child sexual abuse materials, or CSAM, over the past four years. Total revenue from CSAM sold for cryptocurrency has actually gone down since 2021, Chainalysis found, along with the number of new CSAM sellers accepting crypto. But the sophistication of crypto-based CSAM sales has been increasing. More and more, Chainalysis discovered, sellers of CSAM are using privacy tools like “mixers” and “privacy coins” that obfuscate their money trails across blockchains.

Perhaps because of that increased savvy, the company found that CSAM vendors active in 2023 persisted online—and evaded law enforcement—for a longer time than in any previous year, and about 57 percent longer than even in 2022. “Growing sophistication makes identification harder. It makes tracing harder, it makes prosecution harder, and it makes rescuing victims harder,” says Eric Jardine, the researcher who led the Chainalysis study. “So that sophistication dimension is probably the worst one you could see increasing over time.”

Better stealth, longer criminal lifespans

Scouring blockchains, Chainalysis researchers analyzed around 400 cryptocurrency wallets of CSAM sellers and more than 10,000 buyers who sent funds to them over the past four years. Their most disturbing finding in that broad economic study was that crypto-based CSAM sellers seem to have a longer lifespan online than ever, suggesting a kind of relative impunity. On average, CSAM vendors who were active in 2023 remained online for 884 days, compared with 560 days for those active in 2022 and just 112 days in 2020.

To explain that new longevity for some of the most harmful actors on the Internet, Chainalysis points to how CSAM vendors are increasingly laundering their proceeds with cryptocurrency mixers—services that blend users’ funds to make tracing more difficult—such as ChipMixer and Sinbad. (US and German law enforcement shut down ChipMixer in March 2023, but Sinbad remains online despite facing US sanctions for money laundering.) In 2023, Chainalysis found that about 46 percent of CSAM vendors used mixers, up from around 22 percent in 2020.

Chainalysis also found that CSAM vendors are increasingly using “instant exchanger” services that often collect little or no identifying information on traders and allow them to swap bitcoin for cryptocurrencies like Monero and Zcash—”privacy coins” designed to obfuscate or encrypt their blockchains to make tracing their cash-outs of profits far more difficult. Chainalysis’ Jardine says that Monero in particular seems to be gaining popularity among CSAM purveyors. In the company’s investigations, Chainalysis has seen it used repeatedly by CSAM sellers laundering funds through instant exchangers, and in multiple cases it has also seen CSAM forums post Monero addresses to solicit donations. While the instant exchangers did offer other cryptocurrencies, including the privacy coin Zcash, Chainalysis’ report states that “we believe Monero to be the currency of choice for laundering via instant exchangers.”
