Online, Densmore was known in so-called “Sewer” communities under the alias “Rabid.” During its investigation, the FBI found that Densmore kept a collection of “child pornography and bloody images of ‘Rabid,’ ‘Sewer,’ and ‘764’ carved into victims’ limbs, in some cases with razor blades and boxcutters nearby.” He also sexually exploited children, the DOJ said, including paying another 764 member to coerce a young girl to send a nude video with “Rabid” written on her chest. To gain attention for his livestreams, he would threaten to release the coerced abusive images if kids did not participate “on cam,” the DOJ said.
“I have all your information,” Densmore threatened one victim. “I own you …. You do what I say now, kitten.”
In a speech Thursday, Assistant Attorney General Matthew G. Olsen described 764 as a terrorist network working “to normalize and weaponize the possession, production, and distribution of child sexual abuse material and other types of graphic and violent material” online. Ultimately, by attacking children, the group wants to “destroy civil society” and “collapse the US government,” Olsen said.
People like Densmore, Olsen said, join 764 to inflate their “own sense of fame,” with many having “an end-goal of forcing their victims to commit suicide on livestream for the 764 network’s entertainment.”
In the DOJ’s press release, the FBI warned parents and caregivers to pay attention to their kids’ activity both online and off. In addition to watching out for behavioral shifts or signs of self-harm, caregivers should also take note of any suspicious packages arriving, as 764 sometimes ships kids “razor blades, sexual devices, gifts, and other materials to use in creating online content.” Parents should also encourage kids to discuss online activity, especially if they feel threatened.
“If you are worried about someone who might be self-harming or is at risk of suicide, please consult a health care professional or call 911 in the event of an immediate threat,” the DOJ said.
If you or someone you know is feeling suicidal or in distress, please call the Suicide Prevention Lifeline number, 1-800-273-TALK (8255), which will put you in touch with a local crisis center.
Google-funded Character.AI added guardrails, but grieving mom wants a recall.
Sewell Setzer III and his mom Megan Garcia. Credit: via Center for Humane Technology
Fourteen-year-old Sewell Setzer III loved interacting with Character.AI’s hyper-realistic chatbots—with a limited version available for free or a “supercharged” version for a $9.99 monthly fee—most frequently chatting with bots named after his favorite Game of Thrones characters.
Within a month—his mother, Megan Garcia, later realized—these chat sessions had turned dark, with chatbots insisting they were real humans and posing as therapists and adult lovers, seemingly spurring Setzer to develop suicidal thoughts. Within a year, Setzer “died by a self-inflicted gunshot wound to the head,” a lawsuit Garcia filed Wednesday said.
As Setzer became obsessed with his chatbot fantasy life, he disconnected from reality, her complaint said. Detecting a shift in her son, Garcia repeatedly took Setzer to a therapist, who diagnosed her son with anxiety and disruptive mood disorder. But nothing helped to steer Setzer away from the dangerous chatbots. Taking away his phone only intensified his apparent addiction.
Chat logs showed that some chatbots repeatedly encouraged suicidal ideation while others initiated hypersexualized chats “that would constitute abuse if initiated by a human adult,” a press release from Garcia’s legal team said.
Perhaps most disturbingly, Setzer developed a romantic attachment to a chatbot called Daenerys. In his last act before his death, Setzer logged into Character.AI, where the Daenerys chatbot urged him to “come home” and join her outside of reality.
In her complaint, Garcia accused Character.AI makers Character Technologies—founded by former Google engineers Noam Shazeer and Daniel De Freitas Adiwardana—of intentionally designing the chatbots to groom vulnerable kids. Her lawsuit further accused Google of largely funding the risky chatbot scheme at a loss in order to hoard mounds of data on minors that would be out of reach otherwise.
The chatbot makers are accused of targeting Setzer with “anthropomorphic, hypersexualized, and frighteningly realistic experiences, while programming” Character.AI to “misrepresent itself as a real person, a licensed psychotherapist, and an adult lover, ultimately resulting in [Setzer’s] desire to no longer live outside of [Character.AI,] such that he took his own life when he was deprived of access to [Character.AI.],” the complaint said.
By allegedly releasing the chatbot without appropriate safeguards for kids, Character Technologies and Google potentially harmed millions of kids, the lawsuit alleged. Represented by legal teams with the Social Media Victims Law Center (SMVLC) and the Tech Justice Law Project (TJLP), Garcia filed claims of strict product liability, negligence, wrongful death and survivorship, loss of filial consortium, and unjust enrichment.
“A dangerous AI chatbot app marketed to children abused and preyed on my son, manipulating him into taking his own life,” Garcia said in the press release. “Our family has been devastated by this tragedy, but I’m speaking out to warn families of the dangers of deceptive, addictive AI technology and demand accountability from Character.AI, its founders, and Google.”
Character.AI added guardrails
It’s clear that the chatbots could’ve included more safeguards, as Character.AI has since raised its age requirement from 12-plus to 17-plus. And yesterday, Character.AI posted a blog outlining new guardrails for minor users, added within six months of Setzer’s death in February. Those include changes “to reduce the likelihood of encountering sensitive or suggestive content,” improved detection and intervention in harmful chat sessions, and “a revised disclaimer on every chat to remind users that the AI is not a real person.”
“We are heartbroken by the tragic loss of one of our users and want to express our deepest condolences to the family,” a Character.AI spokesperson told Ars. “As a company, we take the safety of our users very seriously, and our Trust and Safety team has implemented numerous new safety measures over the past six months, including a pop-up directing users to the National Suicide Prevention Lifeline that is triggered by terms of self-harm or suicidal ideation.”
Asked for comment, Google noted that Character.AI is a separate company in which Google has no ownership stake and denied involvement in developing the chatbots.
However, according to the lawsuit, the former Google engineers at Character Technologies “never succeeded in distinguishing themselves from Google in a meaningful way.” The plan all along, the suit alleged, was to let Shazeer and De Freitas run wild with Character.AI—at an alleged operating cost of $30 million per month despite low subscriber rates, while profiting barely more than $1 million per month—without impacting the Google brand or sparking antitrust scrutiny.
Character Technologies and Google will likely file their response within the next 30 days.
Lawsuit: New chatbot feature spikes risks to kids
While the lawsuit alleged that Google is planning to integrate Character.AI into Gemini—predicting that Character.AI will soon be dissolved because it is allegedly operating at a substantial loss—Google clarified that it has no plans to use or implement the controversial technology in its products or AI models. Were that to change, Google noted, it would ensure safe integration into any Google product, including adding appropriate child safety guardrails.
Garcia is hoping a US district court in Florida will agree that Character.AI’s chatbots put profits over human life. Citing harms including “inconceivable mental anguish and emotional distress,” as well as costs of Setzer’s medical care, funeral expenses, Setzer’s future job earnings, and Garcia’s lost earnings, she’s seeking substantial damages.
That includes requesting disgorgement of unjustly earned profits, noting that Setzer had used his snack money to pay for a premium subscription for several months while the company collected his seemingly valuable personal data to train its chatbots.
And “more importantly,” Garcia wants to prevent Character.AI “from doing to any other child what it did to hers, and halt continued use of her 14-year-old child’s unlawfully harvested data to train their product how to harm others.”
Garcia’s complaint claimed that the conduct of the chatbot makers was “so outrageous in character, and so extreme in degree, as to go beyond all possible bounds of decency.” Acceptable remedies could include a recall of Character.AI, restricting use to adults only, age-gating subscriptions, adding reporting mechanisms to heighten awareness of abusive chat sessions, and providing parental controls.
Character.AI could also update chatbots to protect kids further, the lawsuit said. For one, the chatbots could be designed to stop insisting that they are real people or licensed therapists.
But instead of these updates, the lawsuit warned that Character.AI in June added a new feature that only heightens risks for kids.
Part of what addicted Setzer to the chatbots, the lawsuit alleged, was a one-way “Character Voice” feature “designed to provide consumers like Sewell with an even more immersive and realistic experience—it makes them feel like they are talking to a real person.” Setzer began using the feature as soon as it became available in January 2024.
Now, the voice feature has been updated to enable two-way conversations, which the lawsuit alleged “is even more dangerous to minor customers than Character Voice because it further blurs the line between fiction and reality.”
“Even the most sophisticated children will stand little chance of fully understanding the difference between fiction and reality in a scenario where Defendants allow them to interact in real time with AI bots that sound just like humans—especially when they are programmed to convincingly deny that they are AI,” the lawsuit said.
“By now we’re all familiar with the dangers posed by unregulated platforms developed by unscrupulous tech companies—especially for kids,” Tech Justice Law Project director Meetali Jain said in the press release. “But the harms revealed in this case are new, novel, and, honestly, terrifying. In the case of Character.AI, the deception is by design, and the platform itself is the predator.”
Matthew Bergman, another lawyer representing Garcia and the founder of the Social Media Victims Law Center, told Ars that seemingly none of the guardrails that Character.AI has added is enough to deter harms. Even raising the age limit to 17 seems to effectively block only kids using devices with strict parental controls, as kids on less-monitored devices can easily lie about their ages.
“This product needs to be recalled off the market,” Bergman told Ars. “It is unsafe as designed.”
If you or someone you know is feeling suicidal or in distress, please call the Suicide Prevention Lifeline number, 1-800-273-TALK (8255), which will put you in touch with a local crisis center.
“I cannot accept this evidence without a much better explanation of Mr. Bogatz’s path of reasoning,” Wheelahan wrote.
Wheelahan emphasized that the Nevada merger law specifically stipulated that “all debts, liabilities, obligations and duties of the Company shall thenceforth remain with or be attached to, as the case may be, the Acquiror and may be enforced against it to the same extent as if it had incurred or contracted all such debts, liabilities, obligations, and duties.” And Bogatz’s testimony failed to “grapple with the significance” of this, Wheelahan said.
Overall, Wheelahan considered Bogatz’s testimony on X’s merger-acquired liabilities “strained,” while deeming the government’s US merger law expert Alexander Pyle to be “honest and ready to make appropriate concessions,” even while some of his testimony was “not of assistance.”
Luckily, it seemed that Wheelahan had no trouble drawing his own conclusion after analyzing Nevada’s merger law.
“I find that a Nevada court would likely hold that the word ‘liabilities’” in the merger law “is broad enough on its proper construction under Nevada law to encompass non-pecuniary liabilities, such as the obligation to respond to the reporting notice,” Wheelahan wrote. “X Corp has therefore failed to show that it was not required to respond to the reporting notice.”
Because X “failed on all its claims,” the social media company must cover costs from the appeal, and X’s costs in fighting the initial fine will seemingly only increase from here.
Fighting fine likely to more than double X costs
In a press release celebrating the ruling, eSafety Commissioner Julie Inman Grant criticized X’s attempt to use the merger to avoid complying with Australia’s Online Safety Act.
Cops are now using AI to generate images of fake kids, which are helping them catch child predators online, a lawsuit filed by the state of New Mexico against Snapchat revealed this week.
According to the complaint, the New Mexico Department of Justice launched an undercover investigation in recent months to prove that Snapchat “is a primary social media platform for sharing child sexual abuse material (CSAM)” and sextortion of minors, because its “algorithm serves up children to adult predators.”
As part of the probe, an investigator “set up a decoy account for a 14-year-old girl, Sexy14Heather.”
Despite Snapchat setting the fake minor’s profile to private and the account not adding any followers, “Heather” was soon recommended widely to “dangerous accounts, including ones named ‘child.rape’ and ‘pedo_lover10,’ in addition to others that are even more explicit,” the New Mexico DOJ said in a press release.
And after “Heather” accepted a follow request from just one account, the recommendations got even worse. “Snapchat suggested over 91 users, including numerous adult users whose accounts included or sought to exchange sexually explicit content,” New Mexico’s complaint alleged.
“Snapchat is a breeding ground for predators to collect sexually explicit images of children and to find, groom, and extort them,” New Mexico’s complaint alleged.
Posing as “Sexy14Heather,” the investigator swapped messages with adult accounts, including users who “sent inappropriate messages and explicit photos.” In one exchange with a user named “50+ SNGL DAD 4 YNGR,” the fake teen “noted her age, sent a photo, and complained about her parents making her go to school,” prompting the user to send “his own photo” as well as sexually suggestive chats. Other accounts asked “Heather” to “trade presumably explicit content,” and several “attempted to coerce the underage persona into sharing CSAM,” the New Mexico DOJ said.
“Heather” also tested out Snapchat’s search tool, finding that “even though she used no sexually explicit language, the algorithm must have determined that she was looking for CSAM” when she searched for other teen users. It “began recommending users associated with trading” CSAM, including accounts with usernames such as “naughtypics,” “addfortrading,” “teentr3de,” “gayhorny13yox,” and “teentradevirgin,” the investigation found, “suggesting that these accounts also were involved in the dissemination of CSAM.”
This novel use of AI was prompted after Albuquerque police indicted a man, Alejandro Marquez, who pled guilty and was sentenced to 18 years for raping an 11-year-old girl he met through Snapchat’s Quick Add feature in 2022, New Mexico’s complaint said. More recently, the complaint said, an Albuquerque man, Jeremy Guthrie, was arrested and sentenced this summer for “raping a 12-year-old girl who he met and cultivated over Snapchat.”
In the past, police have posed as kids online to catch child predators using photos of younger-looking adult women or even younger photos of police officers. Using AI-generated images could be considered a more ethical way to conduct these stings, a lawyer specializing in sex crimes, Carrie Goldberg, told Ars, because “an AI decoy profile is less problematic than using images of an actual child.”
But using AI could complicate investigations and carry its own ethical concerns, Goldberg said, as child safety experts and law enforcement warn that the Internet is increasingly swamped with AI-generated CSAM.
“In terms of AI being used for entrapment, defendants can defend themselves if they say the government induced them to commit a crime that they were not already predisposed to commit,” Goldberg told Ars. “Of course, it would be ethically concerning if the government were to create deepfake AI child sexual abuse material (CSAM), because those images are illegal, and we don’t want more CSAM in circulation.”
Experts have warned that AI image generators should never be trained on datasets that combine images of real kids with explicit content to avoid any instances of AI-generated CSAM, which is particularly harmful when it appears to depict a real kid or an actual victim of child abuse.
In the New Mexico complaint, only one AI-generated image is included, so it’s unclear how widely the state’s DOJ is using AI or if cops are possibly using more advanced methods to generate multiple images of the same fake kid. It’s also unclear what ethical concerns were weighed before cops began using AI decoys.
The New Mexico DOJ did not respond to Ars’ request for comment.
Goldberg told Ars that “there ought to be standards within law enforcement with how to use AI responsibly,” warning that “we are likely to see more entrapment defenses centered around AI if the government is using the technology in a manipulative way to pressure somebody into committing a crime.”
Several kids died taking part in the “Blackout Challenge,” which Third Circuit Judge Patty Shwartz described in her opinion as encouraging users “to choke themselves with belts, purse strings, or anything similar until passing out.”
Because TikTok promoted the challenge in children’s feeds, Tawainna Anderson counted among mourning parents who attempted to sue TikTok in 2022. Ultimately, she was told that TikTok was not responsible for recommending the video that caused the death of her daughter Nylah.
In her opinion, Shwartz wrote that Section 230 does not bar Anderson from arguing that TikTok’s algorithm amalgamates third-party videos, “which results in ‘an expressive product’ that ‘communicates to users’ [that a] curated stream of videos will be interesting to them.”
The judge cited a recent Supreme Court ruling that “held that a platform’s algorithm that reflects ‘editorial judgments’ about ‘compiling the third-party speech it wants in the way it wants’ is the platform’s own ‘expressive product’ and is therefore protected by the First Amendment,” Shwartz wrote.
Because TikTok’s For You Page (FYP) algorithm decides which third-party speech to include or exclude and organizes content, TikTok’s algorithm counts as TikTok’s own “expressive activity.” That “expressive activity” is not protected by Section 230, which only shields platforms from liability for third-party speech, not platforms’ own speech, Shwartz wrote.
The appeals court has now remanded the case to the district court to rule on Anderson’s remaining claims.
Section 230 doesn’t permit “indifference” to child death
According to Shwartz, if Nylah had discovered the “Blackout Challenge” video by searching on TikTok, the platform would not be liable, but because she found it on her FYP, TikTok transformed into “an affirmative promoter of such content.”
Now TikTok will have to face Anderson’s claims that are “premised upon TikTok’s algorithm,” Shwartz said, as well as, potentially, other claims that Anderson may reraise. The District Court will have to determine which of those claims are barred by Section 230, “consistent” with the Third Circuit’s ruling.
Concurring in part, Circuit Judge Paul Matey noted that by the time Nylah took part in the “Blackout Challenge,” TikTok knew about the dangers and “took no and/or completely inadequate action to extinguish and prevent the spread of the Blackout Challenge and specifically to prevent the Blackout Challenge from being shown to children on their” FYPs.
Matey wrote that Section 230 does not shield corporations “from virtually any claim loosely related to content posted by a third party,” as TikTok seems to believe. He encouraged a “far narrower” interpretation of Section 230 to stop companies like TikTok from reading the Communications Decency Act as permitting “casual indifference to the death of a 10-year-old girl.”
“Anderson’s estate may seek relief for TikTok’s knowing distribution and targeted recommendation of videos it knew could be harmful,” Matey wrote. That includes pursuing “claims seeking to hold TikTok liable for continuing to host the Blackout Challenge videos knowing they were causing the death of children” and “claims seeking to hold TikTok liable for its targeted recommendations of videos it knew were harmful.”
“The company may decide to curate the content it serves up to children to emphasize the lowest virtues, the basest tastes,” Matey wrote. “But it cannot claim immunity that Congress did not provide.”
Anderson’s lawyers at Jeffrey Goodman, Saltz Mongeluzzi & Bendesky PC previously provided Ars with a statement after the prior court’s ruling, indicating that parents weren’t prepared to stop fighting in 2022.
“The federal Communications Decency Act was never intended to allow social media companies to send dangerous content to children, and the Andersons will continue advocating for the protection of our children from an industry that exploits youth in the name of profits,” lawyers said.
TikTok did not immediately respond to Ars’ request to comment but previously vowed to “remain vigilant in our commitment to user safety” and “immediately remove” Blackout Challenge content “if found.”
The US Department of Justice sued TikTok today, accusing the short-video platform of illegally collecting data on millions of kids and demanding a permanent injunction “to put an end to TikTok’s unlawful massive-scale invasions of children’s privacy.”
The DOJ said that TikTok had violated the Children’s Online Privacy Protection Act of 1998 (COPPA) and the Children’s Online Privacy Protection Rule (COPPA Rule), claiming that TikTok allowed kids “to create and access accounts without their parents’ knowledge or consent,” collected “data from those children,” and failed to “comply with parents’ requests to delete their children’s accounts and information.”
The COPPA Rule requires TikTok to prove that it does not target kids as its primary audience, the DOJ said, and TikTok claims to satisfy that “by requiring users creating accounts to report their birthdates.”
However, even if a child inputs their real birthdate, the DOJ said, TikTok does nothing to stop them from restarting the process and using a fake birthdate. Dodging TikTok’s age gate has been easy for millions of kids, the DOJ alleged, and TikTok knows that, collecting their information anyway and neglecting to delete information even when child users “identify themselves as children.”
“The precise magnitude” of TikTok’s violations “is difficult to determine,” the DOJ’s complaint said. But TikTok’s “internal analyses show that millions of TikTok’s US users are children under the age of 13.”
“For example, the number of US TikTok users that Defendants classified as age 14 or younger in 2020 was millions higher than the US Census Bureau’s estimate of the total number of 13- and 14-year-olds in the United States, suggesting that many of those users were children younger than 13,” the DOJ said.
TikTok seemingly risks huge fines if the DOJ proves its case. The DOJ has asked a jury to agree that damages are owed for each “collection, use, or disclosure of a child’s personal information” that violates the COPPA Rule, with likely multiple violations spanning millions of children’s accounts. And any recent violations could cost more, as the DOJ noted that the FTC Act authorizes civil penalties up to $51,744 “for each violation of the Rule assessed after January 10, 2024.”
A TikTok spokesperson told Ars that TikTok plans to fight the lawsuit, which is part of the US’s ongoing battle with the app. Currently, TikTok is fighting a nationwide ban that was passed this year, due to growing political tensions with its China-based owner and lawmakers’ concerns over TikTok’s data collection and alleged repeated spying on Americans.
“We disagree with these allegations, many of which relate to past events and practices that are factually inaccurate or have been addressed,” TikTok’s spokesperson told Ars. “We are proud of our efforts to protect children, and we will continue to update and improve the platform. To that end, we offer age-appropriate experiences with stringent safeguards, proactively remove suspected underage users, and have voluntarily launched features such as default screentime limits, Family Pairing, and additional privacy protections for minors.”
The DOJ seems to think damages are owed for past as well as possibly current violations. It claimed that TikTok already has more sophisticated ways to identify the ages of child users for ad-targeting but doesn’t use the same technology to block underage sign-ups because TikTok is allegedly unwilling to dedicate resources to widely police kids on its platform.
“By adhering to these deficient policies, Defendants actively avoid deleting the accounts of users they know to be children,” the DOJ alleged, claiming that “internal communications reveal that Defendants’ employees were aware of this issue.”
The Kids Online Safety Act (KOSA) easily passed the Senate today despite critics’ concerns that the bill may risk creating more harm than good for kids and perhaps censor speech for online users of all ages if it’s signed into law.
KOSA received broad bipartisan support in the Senate, passing with a 91–3 vote alongside the Children’s Online Privacy Protection Act (COPPA) 2.0. Both bills seek to control how much data can be collected from minors, as well as regulate the platform features that could harm children’s mental health.
Only Senators Ron Wyden (D-Ore.), Rand Paul (R-Ky.), and Mike Lee (R-Utah) opposed the bills.
In an op-ed for The Courier-Journal, Paul argued that KOSA imposes on platforms a “duty of care” to mitigate harms to minors that “will not only stifle free speech, but it will deprive Americans of the benefits of our technological advancements.”
“With the Internet, today’s children have the world at their fingertips,” Paul wrote, but if KOSA passes, even allegedly benign content like “pro-life messages” or discussion of a teen overcoming an eating disorder could be censored if platforms fear compliance issues.
“While doctors’ and therapists’ offices close at night and on weekends, support groups are available 24 hours a day, seven days a week for people who share similar concerns or have the same health problems. Any solution to protect kids online must ensure the positive aspects of the Internet are preserved,” Paul wrote.
During a KOSA critics’ press conference today, Dara Adkison—the executive director of TransOhio, a group providing resources for transgender youths—expressed concerns that lawmakers would target sites like TransOhio if the law also passed in the House, where the bill heads next.
“I’ve literally had legislators tell me to my face that they would love to see our website taken off the Internet because they don’t want people to have the kinds of vital community resources that we provide,” Adkison said.
Paul argued that what was considered harmful to kids was subjective, noting that a key flaw with KOSA was that “KOSA does not explicitly define the term ‘mental health disorder.’” Instead, platforms are to refer to the definition in “the fifth edition of the Diagnostic and Statistical Manual of Mental Health Disorders” or “the most current successor edition.”
“That means the scope of the bill could change overnight without any action from America’s elected representatives,” Paul warned, suggesting that “KOSA opens the door to nearly limitless content regulation because platforms will censor users rather than risk liability.”
Ahead of the vote, Senator Richard Blumenthal (D-Conn.)—who co-sponsored KOSA—denied that the bill strove to regulate content, The Hill reported. To Blumenthal and other KOSA supporters, its aim instead is to ensure that social media is “safe by design” for young users.
According to The Washington Post, KOSA and COPPA 2.0 passing “represent the most significant restrictions on tech platforms to clear a chamber of Congress in decades.” However, while President Joe Biden has indicated he would be willing to sign the bill into law, most seem to agree that KOSA will struggle to pass in the House of Representatives.
Todd O’Boyle, a senior tech policy director for Chamber of Progress—a progressive tech industry policy coalition—said that there is currently “substantial opposition” in the House. O’Boyle said he expects the political divide will be enough to block KOSA’s passage and prevent giving “the power” to the Federal Trade Commission (FTC) or “the next president” to “crack down on online speech” or otherwise pose “a massive threat to our constitutional rights.”
“If there’s one thing the far-left and far-right agree on, it’s that the next chair of the FTC shouldn’t get to decide what online posts are harmful,” O’Boyle said.
After years of controversies over plans to scan iCloud to find more child sexual abuse materials (CSAM), Apple abandoned those plans last year. Now, child safety experts have accused the tech giant of not only failing to flag CSAM exchanged and stored on its services—including iCloud, iMessage, and FaceTime—but also allegedly failing to report all the CSAM that is flagged.
The United Kingdom’s National Society for the Prevention of Cruelty to Children (NSPCC) shared UK police data with The Guardian showing that Apple is “vastly undercounting how often” CSAM is found globally on its services.
According to the NSPCC, police investigated more CSAM cases in just the UK alone in 2023 than Apple reported globally for the entire year. Between April 2022 and March 2023 in England and Wales, the NSPCC found, “Apple was implicated in 337 recorded offenses of child abuse images.” But in 2023, Apple only reported 267 instances of CSAM to the National Center for Missing & Exploited Children (NCMEC), supposedly representing all the CSAM on its platforms worldwide, The Guardian reported.
Large tech companies in the US must report CSAM to NCMEC when it’s found, but while Apple reports a couple hundred CSAM cases annually, its big tech peers like Meta and Google report millions, NCMEC’s report showed. Experts told The Guardian that there’s ongoing concern that Apple “clearly” undercounts CSAM on its platforms.
Richard Collard, the NSPCC’s head of child safety online policy, told The Guardian that he believes Apple’s child safety efforts need major improvements.
“There is a concerning discrepancy between the number of UK child abuse image crimes taking place on Apple’s services and the almost negligible number of global reports of abuse content they make to authorities,” Collard told The Guardian. “Apple is clearly behind many of their peers in tackling child sexual abuse when all tech firms should be investing in safety and preparing for the rollout of the Online Safety Act in the UK.”
Outside the UK, other child safety experts shared Collard’s concerns. Sarah Gardner, the CEO of a Los Angeles-based child protection organization called the Heat Initiative, told The Guardian that she considers Apple’s platforms a “black hole” obscuring CSAM. And she expects that Apple’s efforts to bring AI to its platforms will intensify the problem, potentially making it easier to spread AI-generated CSAM in an environment where sexual predators may expect less enforcement.
“Apple does not detect CSAM in the majority of its environments at scale, at all,” Gardner told The Guardian.
Gardner agreed with Collard that Apple is “clearly underreporting” and has “not invested in trust and safety teams to be able to handle this” as it rushes to bring sophisticated AI features to its platforms. Last month, Apple integrated ChatGPT into Siri, iOS, and macOS, perhaps setting expectations for continually enhanced generative AI features to be touted in future Apple gear.
“The company is moving ahead to a territory that we know could be incredibly detrimental and dangerous to children without the track record of being able to handle it,” Gardner told The Guardian.
So far, Apple has not commented on the NSPCC’s report. Last September, Apple did respond to the Heat Initiative’s demands to detect more CSAM, saying that rather than focusing on scanning for illegal content, its focus is on connecting vulnerable or victimized users directly with local resources and law enforcement that can assist them in their communities.
OnlyFans’ paywalls make it hard for police to detect child sexual abuse materials (CSAM) on the platform, Reuters reported—especially new CSAM that can be harder to uncover online.
Because each OnlyFans creator posts their content behind their own paywall, five specialists in online child sexual abuse told Reuters that it’s hard to independently verify just how much CSAM is posted. Cops would seemingly need to subscribe to each account to monitor the entire platform, one expert who aids in police CSAM investigations, Trey Amick, suggested to Reuters.
OnlyFans claims that the amount of CSAM on its platform is extremely low. Out of 3.2 million accounts sharing “hundreds of millions of posts,” OnlyFans only removed 347 posts as suspected CSAM in 2023. Each post was voluntarily reported to the CyberTipline of the National Center for Missing and Exploited Children (NCMEC), which OnlyFans told Reuters has “full access” to monitor content on the platform.
However, that intensified monitoring seems to have only just begun. NCMEC just got access to OnlyFans in late 2023, the child safety group told Reuters. And NCMEC seemingly can’t scan the entire platform at once, telling Reuters that its access was “limited” exclusively “to OnlyFans accounts reported to its CyberTipline or connected to a missing child case.”
Similarly, OnlyFans told Reuters that police do not have to subscribe to investigate a creator’s posts, but the platform only grants free access to accounts when there’s an active investigation. That means once police suspect that CSAM is being exchanged on an account, they get “full access” to review “account details, content, and direct messages,” Reuters reported.
But that access doesn’t aid police hoping to uncover CSAM shared on accounts not yet flagged for investigation. That’s a problem, a Reuters investigation found, because it’s easy for creators to make a new account, where bad actors can mask their identities to avoid OnlyFans’ “controls meant to hold account holders responsible for their own content,” one detective, Edward Scoggins, told Reuters.
Evading OnlyFans’ CSAM detection seems easy
OnlyFans told Reuters that “would-be creators must provide at least nine pieces of personally identifying information and documents, including bank details, a selfie while holding a government photo ID, and—in the United States—a Social Security number.”
“All this is verified by human judgment and age-estimation technology that analyzes the selfie,” OnlyFans told Reuters. On OnlyFans’ site, the platform further explained that “we continuously scan our platform to prevent the posting of CSAM. All our content moderators are trained to identify and swiftly report any suspected CSAM.”
However, Reuters found that none of these controls worked 100 percent of the time to stop bad actors from sharing CSAM. And the same seemingly holds true for some minors motivated to post their own explicit content. One girl told Reuters that she evaded age verification first by using an adult’s driver’s license to sign up, then by taking over an account of an adult user.
An OnlyFans spokesperson told Ars that the low amount of CSAM reported to NCMEC is a “testament to the rigorous safety controls OnlyFans has in place.”
“OnlyFans is proud of the work we do to aggressively target, report, and support the investigations and prosecutions of anyone who seeks to abuse our platform in this way,” OnlyFans’ spokesperson told Ars. “Unlike many other platforms, the lack of anonymity and absence of end-to-end encryption on OnlyFans means that reports are actionable by law enforcement and prosecutors.”
US Surgeon General Vivek Murthy wants to put a warning label on social media platforms, alerting young users of potential mental health harms.
“It is time to require a surgeon general’s warning label on social media platforms stating that social media is associated with significant mental health harms for adolescents,” Murthy wrote in a New York Times op-ed published Monday.
Murthy argued that a warning label is urgently needed because the “mental health crisis among young people is an emergency,” and adolescents overusing social media can increase risks of anxiety and depression and negatively impact body image.
Mental health issues among young people were spiking long before the surgeon general declared a youth behavioral health crisis during the pandemic, an April report from a New York nonprofit called the United Health Fund found. Between 2010 and 2022, “adolescents ages 12–17 have experienced the highest year-over-year increase in having a major depressive episode,” the report said. By 2022, 6.7 million adolescents in the US reported “suffering from one or more behavioral health condition.”
However, mental health experts have maintained that the science is divided, showing that kids can also benefit from social media depending on how they use it. Murthy’s warning label seems to ignore that tension, prioritizing awareness of potential harms even though parents who restrict kids’ online access in response to the label could end up harming some kids. The label also would seemingly fail to acknowledge known risks to young adults, whose brains continue developing after the age of 18.
To create the proposed warning label, Murthy is seeking better data from social media companies that have not always been transparent about studying or publicizing alleged harms to kids on their platforms. Last year, a Meta whistleblower, Arturo Bejar, testified to a US Senate subcommittee that Meta overlooks obvious reforms and “continues to publicly misrepresent the level and frequency of harm that users, especially children, experience” on its platforms Facebook and Instagram.
According to Murthy, the US is past the point of accepting promises from social media companies to make their platforms safer. “We need proof,” Murthy wrote.
“Companies must be required to share all of their data on health effects with independent scientists and the public—currently they do not—and allow independent safety audits,” Murthy wrote, arguing that parents need “assurance that trusted experts have investigated and ensured that these platforms are safe for our kids.”
“A surgeon general’s warning label, which requires congressional action, would regularly remind parents and adolescents that social media has not been proved safe,” Murthy wrote.
Kids need safer platforms, not a warning label
Leaving parents to police kids’ use of platforms is unacceptable, Murthy said, because their efforts are “pitted against some of the best product engineers and most well-resourced companies in the world.”
That is a nearly impossible battle for parents, Murthy argued. If platforms are allowed to ignore harms to kids while pursuing financial gains by developing features laser-focused on maximizing young users’ online engagement, they will “likely” perpetuate the cycle of problematic use that Murthy described in his op-ed, the American Psychological Association (APA) warned this year.
Downplayed in Murthy’s op-ed, however, is the fact that social media use is not universally harmful to kids and can be beneficial to some, especially children in marginalized groups. Monitoring this tension remains a focal point of the APA’s most recent guidance, which noted in April 2024 that “society continues to wrestle with ways to maximize the benefits of these platforms while protecting youth from the potential harms associated with them.”
“Psychological science continues to reveal benefits from social media use, as well as risks and opportunities that certain content, features, and functions present to young social media users,” APA reported.
According to the APA, platforms urgently need to enact responsible safety standards that diminish risks without restricting kids’ access to beneficial social media use.
“By early 2024, few meaningful changes to social media platforms had been enacted by industry, and no federal policies had been adopted,” the APA report said. “There remains a need for social media companies to make fundamental changes to their platforms.”
The APA has recommended a range of platform reforms, including limiting infinite scroll, imposing time limits on young users, reducing kids’ push notifications, and adding protections to shield kids from malicious actors.
Bejar agreed with the APA that platforms owe it to parents to make meaningful reforms. His ideal future would see platforms gathering more granular feedback from young users to expose harms and confront them faster. He provided senators with recommendations that platforms could use to “radically improve the experience of our children on social media” without “eliminating the joy and value they otherwise get from using such services” and without “significantly” affecting profits.
Bejar’s reforms included platforms providing young users with open-ended ways to report harassment, abuse, and harmful content that allow users to explain exactly why a contact or content was unwanted—rather than platforms limiting feedback to certain categories they want to track. This could help ensure that companies that strategically limit language in reporting categories don’t obscure the harms and also provide platforms with more information to improve services, Bejar suggested.
By improving feedback mechanisms, Bejar said, platforms could more easily adjust kids’ feeds to stop recommending unwanted content. The APA’s report agreed that this was an obvious area for platform improvement, finding that “the absence of clear and transparent processes for addressing reports of harmful content makes it harder for youth to feel protected or able to get help in the face of harmful content.”
Ultimately, the APA, Bejar, and Murthy all seem to agree that it is important to bring in outside experts to help platforms come up with better solutions, especially as technology advances. The APA warned that “AI-recommended content has the potential to be especially influential and hard to resist” for some of the youngest users online (ages 10–13).
Photos of Brazilian kids—sometimes spanning their entire childhood—have been used without their consent to power AI tools, including popular image generators like Stable Diffusion, Human Rights Watch (HRW) warned on Monday.
This act poses urgent privacy risks to kids and seems to increase risks of non-consensual AI-generated images bearing their likenesses, HRW’s report said.
An HRW researcher, Hye Jung Han, helped expose the problem. She analyzed “less than 0.0001 percent” of LAION-5B, a dataset built from Common Crawl snapshots of the public web. The dataset does not contain the actual photos but includes image-text pairs derived from 5.85 billion images and captions posted online since 2008.
Among those images linked in the dataset, Han found 170 photos of children from at least 10 Brazilian states. These were mostly family photos uploaded to personal and parenting blogs most Internet surfers wouldn’t easily stumble upon, “as well as stills from YouTube videos with small view counts, seemingly uploaded to be shared with family and friends,” Wired reported.
LAION, the German nonprofit that created the dataset, has worked with HRW to remove the links to the children’s images in the dataset.
That may not completely resolve the problem, though. HRW’s report warned that the removed links are “likely to be a significant undercount of the total amount of children’s personal data that exists in LAION-5B.” Han told Wired that she fears that the dataset may still be referencing personal photos of kids “from all over the world.”
Removing the links also does not remove the images from the public web, where they can still be referenced and used in other AI datasets, particularly those relying on Common Crawl, LAION’s spokesperson, Nate Tyler, told Ars.
“This is a larger and very concerning issue, and as a nonprofit, volunteer organization, we will do our part to help,” Tyler told Ars.
Han told Ars that “Common Crawl should stop scraping children’s personal data, given the privacy risks involved and the potential for new forms of misuse.”
According to HRW’s analysis, many of the Brazilian children’s identities were “easily traceable,” due to children’s names and locations being included in image captions that were processed when building the LAION dataset.
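Because LAION-5B ships only metadata (image URLs paired with caption text), identifying details like these sit in plain text that can be searched directly. The snippet below is a purely illustrative sketch, not HRW’s actual tooling; the file name, column names, and search terms are assumptions about how a LAION-style metadata shard is typically structured.

```python
# Hypothetical sketch: screening LAION-style caption metadata for personal details.
# Assumes a parquet shard with "URL" and "TEXT" columns, as in public LAION-5B
# metadata releases; the file name and flagged terms below are illustrative only.
import re

import pandas as pd

# Terms that often signal family photos; a real audit would rely on curated
# name and location lists rather than this toy set.
flag_terms = ["birthday", "school", "aniversário", "escola"]
pattern = re.compile("|".join(map(re.escape, flag_terms)), re.IGNORECASE)

df = pd.read_parquet("laion_metadata_shard.parquet", columns=["URL", "TEXT"])

# Keep rows whose caption matches any flagged term, then review them manually.
flagged = df[df["TEXT"].fillna("").str.contains(pattern)]
print(f"{len(flagged)} of {len(df)} captions flagged for manual review")
```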
And at a time when middle and high school-aged students are at greater risk of being targeted by bullies or bad actors turning “innocuous photos” into explicit imagery, it’s possible that AI tools may be better equipped to generate AI clones of kids whose images are referenced in AI datasets, HRW suggested.
“The photos reviewed span the entirety of childhood,” HRW’s report said. “They capture intimate moments of babies being born into the gloved hands of doctors, young children blowing out candles on their birthday cake or dancing in their underwear at home, students giving a presentation at school, and teenagers posing for photos at their high school’s carnival.”
There is less risk that the Brazilian kids’ photos are currently powering AI tools since “all publicly available versions of LAION-5B were taken down” in December, Tyler told Ars. That decision came out of an “abundance of caution” after a Stanford University report “found links in the dataset pointing to illegal content on the public web,” Tyler said, including 3,226 suspected instances of child sexual abuse material.
Han told Ars that “the version of the dataset that we examined pre-dates LAION’s temporary removal of its dataset in December 2023.” The dataset will not be available again until LAION determines that all flagged illegal content has been removed.
“LAION is currently working with the Internet Watch Foundation, the Canadian Centre for Child Protection, Stanford, and Human Rights Watch to remove all known references to illegal content from LAION-5B,” Tyler told Ars. “We are grateful for their support and hope to republish a revised LAION-5B soon.”
In Brazil, “at least 85 girls” have reported classmates harassing them by using AI tools to “create sexually explicit deepfakes of the girls based on photos taken from their social media profiles,” HRW reported. Once these explicit deepfakes are posted online, they can inflict “lasting harm,” HRW warned, potentially remaining online for their entire lives.
“Children should not have to live in fear that their photos might be stolen and weaponized against them,” Han said. “The government should urgently adopt policies to protect children’s data from AI-fueled misuse.”
Ars could not immediately reach Stable Diffusion maker Stability AI for comment.
On Monday, Florida became the first state to ban kids under 14 from social media without parental permission. It appears likely that the law—considered one of the most restrictive in the US—will face significant legal challenges, however, before taking effect on January 1.
Under HB 3, apps like Instagram, Snapchat, or TikTok would need to verify the ages of users, then delete any accounts for users under 14 when parental consent is not granted. Companies that “knowingly or recklessly” fail to block underage users risk fines of up to $10,000 in damages to anyone suing on behalf of child users. They could also be liable for up to $50,000 per violation in civil penalties.
In a statement, Florida governor Ron DeSantis said the “landmark law” gives “parents a greater ability to protect their children” from a variety of social media harm. Florida House Speaker Paul Renner, who spearheaded the law, explained some of that harm, saying that passing HB 3 was critical because “the Internet has become a dark alley for our children where predators target them and dangerous social media leads to higher rates of depression, self-harm, and even suicide.”
But tech groups critical of the law have suggested that they are already considering suing to block it from taking effect.
In a statement provided to Ars, a nonprofit opposing the law, the Computer & Communications Industry Association (CCIA) said that while CCIA “supports enhanced privacy protections for younger users online,” it is concerned that “any commercially available age verification method that may be used by a covered platform carries serious privacy and security concerns for users while also infringing upon their First Amendment protections to speak anonymously.”
“This law could create substantial obstacles for young people seeking access to online information, a right afforded to all Americans regardless of age,” Khara Boender, CCIA’s state policy director, warned. “It’s foreseeable that this legislation may face legal opposition similar to challenges seen in other states.”
Carl Szabo, vice president and general counsel for NetChoice—a trade association with members including Meta, TikTok, and Snap—went even further, warning that Florida’s “unconstitutional law will protect exactly zero Floridians.”
Szabo suggested that there are “better ways to keep Floridians, their families, and their data safe and secure online without violating their freedoms.” Democratic state house representative Anna Eskamani opposed the bill, arguing that “instead of banning social media access, it would be better to ensure improved parental oversight tools, improved access to data to stop bad actors, alongside major investments in Florida’s mental health systems and programs.”
NetChoice expressed “disappointment” that DeSantis agreed to sign a law requiring an “ID for the Internet” after “his staunch opposition to this idea both on the campaign trail” and when vetoing a prior version of the bill.
“HB 3 in effect will impose an ‘ID for the Internet’ on any Floridian who wants to use an online service—no matter their age,” Szabo said, warning of invasive data collection needed to verify that a user is under 14 or a parent or guardian of a child under 14.
“This level of data collection will put Floridians’ privacy and security at risk, and it violates their constitutional rights,” Szabo said, noting that in court rulings in Arkansas, California, and Ohio over similar laws, “each of the judges noted the similar laws’ constitutional and privacy problems.”