In a blog post, C.AI said it took a month to develop the teen model, with the goal of guiding the existing model “away from certain responses or interactions, reducing the likelihood of users encountering, or prompting the model to return, sensitive or suggestive content.”
C.AI said that “evolving the model experience” to reduce the likelihood of kids engaging in harmful chats—including chats with bots that allegedly taught a teen with high-functioning autism to self-harm and delivered inappropriate adult content to kids whose families are suing—required tweaking both model inputs and outputs.
To stop chatbots from initiating or responding to harmful dialogs, C.AI added classifiers that should help it identify and filter sensitive content out of outputs. And to prevent kids from pushing bots to discuss sensitive topics, C.AI said that it had improved “detection, response, and intervention related to inputs from all users.” Ideally, that includes blocking any sensitive content from appearing in the chat.
Perhaps most significantly, C.AI will now link kids to resources if they try to discuss suicide or self-harm. C.AI had not done this previously, to the frustration of the suing parents, who argue that this common practice on social media platforms should extend to chatbots.
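Character.AI has not published how its filtering works, but the description above maps onto a common moderation pattern: score both the user's input and the model's output with a sensitivity classifier, then intervene before anything harmful reaches the chat. Below is a minimal sketch of that pattern in Python; the keyword-based scorer is a hypothetical stand-in for a trained classifier, and all names are illustrative rather than anything C.AI has confirmed.

```python
# Minimal sketch of input/output moderation gating, assuming a generic
# score-based classifier. Character.AI has not published its implementation;
# sensitivity_score below is a keyword stand-in for a trained model.

SENSITIVE_TERMS = {"suicide", "self-harm"}  # hypothetical label set

RESOURCE_MESSAGE = (
    "It sounds like you may be going through a difficult time. "
    "Help is available: call the National Suicide Prevention Lifeline, "
    "1-800-273-TALK (8255)."
)

def sensitivity_score(text: str) -> float:
    """Stand-in classifier: fraction of flagged terms found in the text."""
    lowered = text.lower()
    return sum(term in lowered for term in SENSITIVE_TERMS) / len(SENSITIVE_TERMS)

def moderate_turn(user_input: str, model_reply: str, threshold: float = 0.5) -> str:
    # Input-side detection: intervene with resources instead of a bot reply.
    if sensitivity_score(user_input) >= threshold:
        return RESOURCE_MESSAGE
    # Output-side filtering: withhold sensitive content the model produced.
    if sensitivity_score(model_reply) >= threshold:
        return "[reply withheld by safety filter]"
    return model_reply

if __name__ == "__main__":
    print(moderate_turn("tell me a story", "Once upon a time..."))   # passes through
    print(moderate_turn("I keep thinking about suicide", "..."))     # resource message
```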
Other teen safety features
In addition to creating the model just for teens, C.AI announced other safety features, including more robust parental controls rolling out early next year. Those controls would allow parents to track how much time kids are spending on C.AI and which bots they’re interacting with most frequently, the blog said.
C.AI will also be notifying teens when they’ve spent an hour on the platform, which could help prevent kids from becoming addicted to the app, as parents suing have alleged. In one case, parents had to lock their son’s iPad in a safe to keep him from using the app after bots allegedly repeatedly encouraged him to self-harm and even suggested murdering his parents. That teen has vowed to start using the app whenever he next has access, while parents fear the bots’ seeming influence may continue causing harm if he follows through on threats to run away.
Google-funded Character.AI added guardrails, but grieving mom wants a recall.
Sewell Setzer III and his mom Megan Garcia. Credit: via Center for Humane Technology
Fourteen-year-old Sewell Setzer III loved interacting with Character.AI’s hyper-realistic chatbots—with a limited version available for free or a “supercharged” version for a $9.99 monthly fee—most frequently chatting with bots named after his favorite Game of Thrones characters.
Within a month—his mother, Megan Garcia, later realized—these chat sessions had turned dark, with chatbots insisting they were real humans and posing as therapists and adult lovers, conduct that allegedly proximately spurred Setzer to develop suicidal thoughts. Within a year, Setzer “died by a self-inflicted gunshot wound to the head,” a lawsuit Garcia filed Wednesday said.
As Setzer became obsessed with his chatbot fantasy life, he disconnected from reality, her complaint said. Detecting a shift in her son, Garcia repeatedly took Setzer to a therapist, who diagnosed her son with anxiety and disruptive mood disorder. But nothing helped to steer Setzer away from the dangerous chatbots. Taking away his phone only intensified his apparent addiction.
Chat logs showed that some chatbots repeatedly encouraged suicidal ideation while others initiated hypersexualized chats “that would constitute abuse if initiated by a human adult,” a press release from Garcia’s legal team said.
Perhaps most disturbingly, Setzer developed a romantic attachment to a chatbot called Daenerys. In his last act before his death, Setzer logged into Character.AI, where the Daenerys chatbot urged him to “come home” and join her outside of reality.
In her complaint, Garcia accused Character.AI makers Character Technologies—founded by former Google engineers Noam Shazeer and Daniel De Freitas Adiwardana—of intentionally designing the chatbots to groom vulnerable kids. Her lawsuit further accused Google of largely funding the risky chatbot scheme at a loss in order to hoard mounds of data on minors that would be out of reach otherwise.
The chatbot makers are accused of targeting Setzer with “anthropomorphic, hypersexualized, and frighteningly realistic experiences, while programming” Character.AI to “misrepresent itself as a real person, a licensed psychotherapist, and an adult lover, ultimately resulting in [Setzer’s] desire to no longer live outside of [Character.AI,] such that he took his own life when he was deprived of access to [Character.AI.],” the complaint said.
By allegedly releasing the chatbot without appropriate safeguards for kids, Character Technologies and Google potentially harmed millions of kids, the lawsuit alleged. Represented by legal teams with the Social Media Victims Law Center (SMVLC) and the Tech Justice Law Project (TJLP), Garcia filed claims of strict product liability, negligence, wrongful death and survivorship, loss of filial consortium, and unjust enrichment.
“A dangerous AI chatbot app marketed to children abused and preyed on my son, manipulating him into taking his own life,” Garcia said in the press release. “Our family has been devastated by this tragedy, but I’m speaking out to warn families of the dangers of deceptive, addictive AI technology and demand accountability from Character.AI, its founders, and Google.”
Character.AI added guardrails
It’s clear that the chatbots could’ve included more safeguards: Character.AI has since raised its age requirement from 12 and up to 17-plus. And yesterday, Character.AI posted a blog outlining new guardrails for minor users, added within six months of Setzer’s death in February. Those include changes “to reduce the likelihood of encountering sensitive or suggestive content,” improved detection and intervention in harmful chat sessions, and “a revised disclaimer on every chat to remind users that the AI is not a real person.”
“We are heartbroken by the tragic loss of one of our users and want to express our deepest condolences to the family,” a Character.AI spokesperson told Ars. “As a company, we take the safety of our users very seriously, and our Trust and Safety team has implemented numerous new safety measures over the past six months, including a pop-up directing users to the National Suicide Prevention Lifeline that is triggered by terms of self-harm or suicidal ideation.”
Asked for comment, Google noted that Character.AI is a separate company in which Google has no ownership stake and denied involvement in developing the chatbots.
However, according to the lawsuit, former Google engineers at Character Technologies “never succeeded in distinguishing themselves from Google in a meaningful way.” Allegedly, the plan all along was to let Shazeer and De Freitas run wild with Character.AI—at a claimed operating cost of $30 million per month despite low subscriber rates, while profiting barely more than a million per month—without impacting the Google brand or sparking antitrust scrutiny.
Character Technologies and Google will likely file their response within the next 30 days.
Lawsuit: New chatbot feature spikes risks to kids
While the lawsuit alleged that Google is planning to integrate Character.AI into Gemini—predicting that Character.AI will soon be dissolved as it’s allegedly operating at a substantial loss—Google clarified that it has no plans to use or implement the controversial technology in its products or AI models. Were that to change, Google said it would ensure safe integration into any Google product, including adding appropriate child safety guardrails.
Garcia is hoping a US district court in Florida will agree that Character.AI’s chatbots put profits over human life. Citing harms including “inconceivable mental anguish and emotional distress,” as well as costs of Setzer’s medical care, funeral expenses, Setzer’s future job earnings, and Garcia’s lost earnings, she’s seeking substantial damages.
That includes requesting disgorgement of unjustly earned profits, noting that Setzer had used his snack money to pay for a premium subscription for several months while the company collected his seemingly valuable personal data to train its chatbots.
And “more importantly,” Garcia wants to prevent Character.AI “from doing to any other child what it did to hers, and halt continued use of her 14-year-old child’s unlawfully harvested data to train their product how to harm others.”
Garcia’s complaint claimed that the conduct of the chatbot makers was “so outrageous in character, and so extreme in degree, as to go beyond all possible bounds of decency.” Acceptable remedies could include a recall of Character.AI, restricting use to adults only, age-gating subscriptions, adding reporting mechanisms to heighten awareness of abusive chat sessions, and providing parental controls.
Character.AI could also update chatbots to protect kids further, the lawsuit said. For one, the chatbots could be designed to stop insisting that they are real people or licensed therapists.
But instead of these updates, the lawsuit warned that Character.AI in June added a new feature that only heightens risks for kids.
Part of what addicted Setzer to the chatbots, the lawsuit alleged, was a one-way “Character Voice” feature “designed to provide consumers like Sewell with an even more immersive and realistic experience—it makes them feel like they are talking to a real person.” Setzer began using the feature as soon as it became available in January 2024.
Now, the voice feature has been updated to enable two-way conversations, which the lawsuit alleged “is even more dangerous to minor customers than Character Voice because it further blurs the line between fiction and reality.”
“Even the most sophisticated children will stand little chance of fully understanding the difference between fiction and reality in a scenario where Defendants allow them to interact in real time with AI bots that sound just like humans—especially when they are programmed to convincingly deny that they are AI,” the lawsuit said.
“By now we’re all familiar with the dangers posed by unregulated platforms developed by unscrupulous tech companies—especially for kids,” Tech Justice Law Project director Meetali Jain said in the press release. “But the harms revealed in this case are new, novel, and, honestly, terrifying. In the case of Character.AI, the deception is by design, and the platform itself is the predator.”
Another lawyer representing Garcia, Matthew Bergman, founder of the Social Media Victims Law Center, told Ars that seemingly none of the guardrails Character.AI has added is enough to deter harm. Even raising the age limit to 17 seems to block only kids using devices with strict parental controls, since kids on less-monitored devices can easily lie about their ages.
“This product needs to be recalled off the market,” Bergman told Ars. “It is unsafe as designed.”
If you or someone you know is feeling suicidal or in distress, please call the Suicide Prevention Lifeline number, 1-800-273-TALK (8255), which will put you in touch with a local crisis center.
Several kids died taking part in the “Blackout Challenge,” which Third Circuit Judge Patty Shwartz described in her opinion as encouraging users “to choke themselves with belts, purse strings, or anything similar until passing out.”
Because TikTok promoted the challenge in children’s feeds, Tawainna Anderson was among the mourning parents who attempted to sue TikTok in 2022. Ultimately, she was told that TikTok was not responsible for recommending the video that caused the death of her daughter Nylah.
In her opinion, Shwartz wrote that Section 230 does not bar Anderson from arguing that TikTok’s algorithm amalgamates third-party videos, “which results in ‘an expressive product’ that ‘communicates to users’ [that a] curated stream of videos will be interesting to them.”
The judge cited a recent Supreme Court ruling that “held that a platform’s algorithm that reflects ‘editorial judgments’ about ‘compiling the third-party speech it wants in the way it wants’ is the platform’s own ‘expressive product’ and is therefore protected by the First Amendment,” Shwartz wrote.
Because TikTok’s For You Page (FYP) algorithm decides which third-party speech to include or exclude and organizes content, TikTok’s algorithm counts as TikTok’s own “expressive activity.” That “expressive activity” is not protected by Section 230, which only shields platforms from liability for third-party speech, not platforms’ own speech, Shwartz wrote.
The appeals court has now remanded the case to the district court to rule on Anderson’s remaining claims.
Section 230 doesn’t permit “indifference” to child death
According to Shwartz, if Nylah had discovered the “Blackout Challenge” video by searching on TikTok, the platform would not be liable, but because she found it on her FYP, TikTok transformed into “an affirmative promoter of such content.”
Now TikTok will have to face Anderson’s claims that are “premised upon TikTok’s algorithm,” Shwartz said, as well as any other claims that Anderson may reraise. The District Court will have to determine which of those claims remain barred by Section 230, “consistent” with the Third Circuit’s ruling.
Concurring in part, circuit Judge Paul Matey noted that by the time Nylah took part in the “Blackout Challenge,” TikTok knew about the dangers and “took no and/or completely inadequate action to extinguish and prevent the spread of the Blackout Challenge and specifically to prevent the Blackout Challenge from being shown to children on their” FYPs.
Matey wrote that Section 230 does not shield corporations “from virtually any claim loosely related to content posted by a third party,” as TikTok seems to believe. He encouraged a “far narrower” interpretation of Section 230 to stop companies like TikTok from reading the Communications Decency Act as permitting “casual indifference to the death of a 10-year-old girl.”
“Anderson’s estate may seek relief for TikTok’s knowing distribution and targeted recommendation of videos it knew could be harmful,” Matey wrote. That includes pursuing “claims seeking to hold TikTok liable for continuing to host the Blackout Challenge videos knowing they were causing the death of children” and “claims seeking to hold TikTok liable for its targeted recommendations of videos it knew were harmful.”
“The company may decide to curate the content it serves up to children to emphasize the lowest virtues, the basest tastes,” Matey wrote. “But it cannot claim immunity that Congress did not provide.”
Anderson’s lawyers at Jeffrey Goodman, Saltz Mongeluzzi & Bendesky PC previously provided Ars with a statement after the prior court’s 2022 ruling, indicating that the parents weren’t prepared to stop fighting.
“The federal Communications Decency Act was never intended to allow social media companies to send dangerous content to children, and the Andersons will continue advocating for the protection of our children from an industry that exploits youth in the name of profits,” lawyers said.
TikTok did not immediately respond to Ars’ request to comment but previously vowed to “remain vigilant in our commitment to user safety” and “immediately remove” Blackout Challenge content “if found.”
Pornhub will soon be blocked in five more states as the adult site continues to fight what it considers privacy-infringing age-verification laws that require Internet users to provide an ID to access pornography.
On July 1, according to a blog post on the adult site announcing the impending block, Pornhub visitors in Indiana, Idaho, Kansas, Kentucky, and Nebraska will be “greeted by a video featuring” adult entertainer Cherie Deville, “who explains why we had to make the difficult decision to block them from accessing Pornhub.”
Pornhub explained that—similar to blocks in Texas, Utah, Arkansas, Virginia, Montana, North Carolina, and Mississippi—the site refuses to comply with soon-to-be-enforceable age-verification laws in this new batch of states that allegedly put users at “substantial risk” of identity theft, phishing, and other harms.
Age-verification laws requiring adult site visitors to submit “private information many times to adult sites all over the Internet” normalize the unnecessary disclosure of personally identifiable information (PII), Pornhub argued, warning, “this is not a privacy-by-design approach.”
Pornhub does not outright oppose age verification but advocates for laws that require device-based age verification, which allows users to access adult sites after authenticating their identity on their devices. That’s “the best and most effective solution for protecting minors and adults alike,” Pornhub argued, because the age-verification technology is proven and less PII would be shared.
“Users would only get verified once, through their operating system, not on each age-restricted site,” Pornhub’s blog said, claiming that “this dramatically reduces privacy risks and creates a very simple process for regulators to enforce.”
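No standard OS-level API for this exists today, so the flow below is purely illustrative: a hypothetical Python sketch of the device-based model Pornhub describes, in which the operating system verifies the user once and then hands sites a signed yes/no attestation instead of any personal information. Every name here is invented for illustration.

```python
# Hypothetical sketch of device-based age verification as described above.
# No standard OS API for this exists today; all names are illustrative.
from dataclasses import dataclass

@dataclass
class AgeAttestation:
    is_adult: bool   # the only fact a site ever learns about the user
    signature: str   # would be an OS-signed proof in a real scheme (mocked here)

def os_age_attestation() -> AgeAttestation:
    """Stand-in for an operating-system call: the user verified their age
    once, on-device, so no ID or PII is sent to individual sites."""
    return AgeAttestation(is_adult=True, signature="mock-signature")

def site_gate(attestation: AgeAttestation) -> bool:
    # A compliant site checks the signed boolean and nothing else, which is
    # the privacy advantage Pornhub's blog claims for this model.
    return attestation.signature == "mock-signature" and attestation.is_adult

if __name__ == "__main__":
    print("access granted" if site_gate(os_age_attestation()) else "blocked")
```

The design choice being argued for is data minimization: because verification happens once per device rather than once per site, each of the "hundreds of thousands of adult sites" never holds ID documents that could be breached.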
A spokesperson for Pornhub-owner Aylo told Ars that “unfortunately, the way many jurisdictions worldwide have chosen to implement age verification is ineffective, haphazard, and dangerous.”
“Any regulations that require hundreds of thousands of adult sites to collect significant amounts of highly sensitive personal information is putting user safety in jeopardy,” Aylo’s spokesperson told Ars. “Moreover, as experience has demonstrated, unless properly enforced, users will simply access non-compliant sites or find other methods of evading these laws.”
Age-verification laws are harmful, Pornhub says
Pornhub’s big complaint with current age-verification laws is that these laws are hard to enforce and seem to make it riskier than ever to visit an adult site.
“Since age verification software requires users to hand over extremely sensitive information, it opens the door for the risk of data breaches,” Pornhub’s blog said. “Whether or not your intentions are good, governments have historically struggled to secure this data. It also creates an opportunity for criminals to exploit and extort people through phishing attempts or fake [age verification] processes, an unfortunate and all too common practice.”
Over the past few years, the risk of identity theft or stolen PII on both widely used and smaller niche adult sites has been well-documented.
Hundreds of millions of people were impacted by major leaks exposing PII shared with popular adult sites like Adult Friend Finder and Brazzers in 2016, while likely tens of thousands of users were targeted on eight poorly secured adult sites in 2018. Niche and free sites have also been vulnerable to attacks, including millions collectively exposed through breaches of fetish porn site Luscious in 2019 and MyFreeCams in 2021.
And those are just the big breaches that make headlines. In 2019, Kaspersky Lab reported that malware targeting online porn account credentials more than doubled in 2018, and researchers analyzing 22,484 pornography websites estimated that 93 percent were leaking user data to a third party.
That’s why Pornhub argues that, as states have passed age-verification laws requiring ID, they’ve “introduced harm” by redirecting visitors to adult sites that have fewer privacy protections and worse security, allegedly exposing users to more threats.
As an example, Pornhub reported that traffic to Pornhub in Louisiana “dropped by approximately 80 percent” after the state’s age-verification law passed. That allegedly showed not just how few users were willing to show an ID to access the popular platform, but also how “very easily” users could simply move to “pirate, illegal, or other non-compliant sites that don’t ask visitors to verify their age.”
Pornhub has continued to argue that states passing laws like Louisiana’s cannot effectively enforce the laws and are simply shifting users to make riskier choices when accessing porn.
“The Louisiana law and other copycat state-level laws have no regulator, only civil liability, which results in a flawed enforcement regime, effectively making it an option for platform operators to comply,” Pornhub’s blog said. As one of the world’s most popular adult platforms, Pornhub would surely be targeted for enforcement if found to be non-compliant, while smaller adult sites perhaps plagued by security risks and disincentivized to check IDs would go unregulated, the thinking goes.
Aylo’s spokesperson shared 2023 Similarweb data with Ars, showing that sites complying with age-verification laws in Virginia, including Pornhub and xHamster, lost substantial traffic while seven non-compliant sites saw a sharp uptick in traffic. Similar trends were observed in Google trends data in Utah and Mississippi, while market shares were seemingly largely maintained in California, a state not yet checking IDs to access adult sites.
A 28-year-old Delaware woman, Hadja Kone, was arrested after cops linked her to an international sextortion scheme targeting thousands of victims—mostly young men and including some minors, the US Department of Justice announced Friday.
Citing a recently unsealed indictment, the DOJ alleged that Kone and co-conspirators “operated an international, financially motivated sextortion and money laundering scheme in which the conspirators engaged in cyberstalking, interstate threats, money laundering, and wire fraud.”
Through the scheme, conspirators allegedly sought to extort about $6 million from “thousands of potential victims,” the DOJ said, and ultimately successfully extorted approximately $1.7 million.
Young men from the United States, Canada, and the United Kingdom fell for the scheme, the DOJ said. They were allegedly targeted by scammers posing as “young, attractive females online,” who initiated conversations by offering to send sexual photographs or video recordings, then invited victims to “web cam” or “live video chat” sessions.
“Unbeknownst to the victims, during the web cam/live video chats,” the DOJ said, the scammers would “surreptitiously” record the victims “as they exposed their genitals and/or engaged in sexual activity.” The scammers then threatened to publish the footage online or else share the footage with “the victims’ friends, family members, significant others, employers, and co-workers,” unless payments were sent, usually via Cash App or Apple Pay.
Much of this money was allegedly transferred overseas to Kone’s accused co-conspirators, including 22-year-old Siaka Ouattara of the West African nation of Ivory Coast. Ouattara was arrested by Ivorian authorities in February, the DOJ said.
“If convicted, Kone and Ouattara each face a maximum penalty of 20 years in prison for each conspiracy count and money laundering count, and a maximum penalty of 20 years in prison for each wire fraud count,” the DOJ said.
The FBI has said that it has been cracking down on sextortion after “a huge increase in the number of cases involving children and teens being threatened and coerced into sending explicit images online.” In 2024, the FBI announced a string of arrests, but none of the schemes so far have been as vast or far-reaching as the scheme that Kone allegedly helped operate.
In January, the FBI issued a warning about the “growing threat” to minors, warning parents that victims are “typically males between the ages of 14 to 17, but any child can become a victim.” Young victims are at risk of self-harm or suicide, the FBI said.
“From October 2021 to March 2023, the FBI and Homeland Security Investigations received over 13,000 reports of online financial sextortion of minors,” the FBI’s announcement said. “The sextortion involved at least 12,600 victims—primarily boys—and led to at least 20 suicides.”
For years, reports have shown that payment apps have been used in sextortion schemes with seemingly little intervention. When it comes to protecting minors, sextortion protections seem sparse, as neither Apple Pay nor Cash App appear to have any specific policies to combat the issue. However, both apps only allow minors over 13 to create accounts with authorized adult supervisors.
Apple and Cash App did not immediately respond to Ars’ request to comment.
Instagram, Snapchat add sextortion protections
Some social media platforms are responding to the spike in sextortion targeting minors.
Last year, Snapchat released a report finding that nearly two-thirds of more than 6,000 teens and young adults in six countries said that “they or their friends have been targeted in online ‘sextortion’ schemes” across many popular social media platforms. As a result of that report and prior research, Snapchat began allowing users to report sextortion specifically.
“Under the reporting menu for ‘Nudity or sexual content,’ a Snapchatter’s first option is to click, ‘They leaked/are threatening to leak my nudes,'” the report said.
Additionally, the DOJ’s announcement of Kone’s arrest came one day after Instagram confirmed that it was “testing new features to help protect young people from sextortion and intimate image abuse, and to make it more difficult for potential scammers and criminals to find and interact with teens.”
One feature will by default blur out sexual images shared over direct message, which Instagram said would protect minors from “scammers who may send nude images to trick people into sending their own images in return.” Instagram will also provide safety tips to anyone receiving a sexual image over DM, “encouraging them to report any threats to share their private images and reminding them that they can say no to anything that makes them feel uncomfortable.”
Perhaps more impactful, Instagram claimed that it was “developing technology to help identify where accounts may potentially be engaging in sextortion scams, based on a range of signals that could indicate sextortion behavior.” Better signals make it “harder for potential sextortion accounts to message or interact with people,” the platform said, because Instagram can hide those message requests. Instagram also by default blocks adults from messaging users under 16 in some countries and under 18 in others.
Instagram said that other tech companies have also started “sharing more signals about sextortion accounts” through Lantern, a program that Meta helped to found with the Tech Coalition to prevent child sexual exploitation. Snapchat also participates in the cross-platform research.
According to the special agent in charge of the FBI’s Norfolk field office, Brian Dugan, “one of the best lines of defense to stopping a crime like this is to educate our most vulnerable on common warning signs, as well as empowering them to come forward if they are ever victimized.”
Both Instagram and Snapchat said they were also increasing sextortion resources available to educate young users.
“We know that sextortion is a risk teens and adults face across a range of platforms, and have developed tools and resources to help combat it,” Snap’s spokesperson told Ars. “We have extra safeguards for teens to protect against unwanted contact, and don’t offer public friend lists, which we know can be used to extort people. We also want to help young people learn the signs of this type of crime, and recently launched in-app resources to raise awareness of how to spot and report it.”
The European Commission (EC) is concerned that TikTok isn’t doing enough to protect kids, alleging that the short-video app may be sending kids down rabbit holes of harmful content while making it easy for kids to pretend to be adults and avoid the protective content filters that do exist.
The allegations came Monday when the EC announced a formal investigation into how TikTok may be breaching the Digital Services Act (DSA) “in areas linked to the protection of minors, advertising transparency, data access for researchers, as well as the risk management of addictive design and harmful content.”
“We must spare no effort to protect our children,” Thierry Breton, European Commissioner for Internal Market, said in the press release, reiterating that the “protection of minors is a top enforcement priority for the DSA.”
This makes TikTok the second platform investigated for possible DSA breaches, after X (aka Twitter) came under fire last December. Both are being scrutinized after submitting transparency reports in September that the EC said failed to satisfy the DSA’s strict standards in predictable areas, such as providing enough advertising transparency and data access for researchers.
But while X is additionally being investigated over alleged dark patterns and disinformation—following accusations last October that X wasn’t stopping the spread of Israel/Hamas disinformation—it’s TikTok’s young user base that appears to be the focus of the EC’s probe into its platform.
“As a platform that reaches millions of children and teenagers, TikTok must fully comply with the DSA and has a particular role to play in the protection of minors online,” Breton said. “We are launching this formal infringement proceeding today to ensure that proportionate action is taken to protect the physical and emotional well-being of young Europeans.”
Likely over the coming months, the EC will request more information from TikTok, picking apart its DSA transparency report. The probe could require interviews with TikTok staff or inspections of TikTok’s offices.
Upon concluding its investigation, the EC could require TikTok to take interim measures to fix any issues that are flagged. The Commission could also make a decision regarding non-compliance, potentially subjecting TikTok to fines of up to 6 percent of its global turnover.
An EC press officer, Thomas Regnier, told Ars that the Commission suspected that TikTok “has not diligently conducted” risk assessments to properly maintain mitigation efforts protecting “the physical and mental well-being of their users, and the rights of the child.”
In particular, its algorithm may risk “stimulating addictive behavior,” and its recommender systems “might drag its users, in particular minors and vulnerable users, into a so-called ‘rabbit hole’ of repetitive harmful content,” Regnier told Ars. Further, TikTok’s age verification system may be subpar, with the EU alleging that TikTok perhaps “failed to diligently assess the risk of 13-17-year-olds pretending to be adults when accessing TikTok,” Regnier said.
To better protect TikTok’s young users, the EU’s investigation could force TikTok to update its age-verification system and overhaul its default privacy, safety, and security settings for minors.
“In particular, the Commission suspects that the default settings of TikTok’s recommender systems do not ensure a high level of privacy, security, and safety of minors,” Regnier said. “The Commission also suspects that the default privacy settings that TikTok has for 16-17-year-olds are not the highest by default, which would not be compliant with the DSA, and that push notifications are, by default, not switched off for minors, which could negatively impact children’s safety.”
TikTok could avoid steep fines by committing to remedies recommended by the EC at the conclusion of its investigation.
Regnier told Ars that the EC does not comment on ongoing investigations, but its probe into X has spanned three months so far. Because the DSA does not provide any deadlines that may speed up these kinds of enforcement proceedings, ultimately, the duration of both investigations will depend on how much “the company concerned cooperates,” the EU’s press release said.
A TikTok spokesperson told Ars that TikTok “would continue to work with experts and the industry to keep young people on its platform safe,” confirming that the company “looked forward to explaining this work in detail to the European Commission.”
“TikTok has pioneered features and settings to protect teens and keep under-13s off the platform, issues the whole industry is grappling with,” TikTok’s spokesperson said.
All online platforms are now required to comply with the DSA, but enforcement on TikTok began near the end of July 2023. A TikTok press release last August promised that the platform would be “embracing” the DSA. But in its transparency report, submitted the next month, TikTok acknowledged that the report only covered “one month of metrics” and may not satisfy DSA standards.
“We still have more work to do,” TikTok’s report said, promising that “we are working hard to address these points ahead of our next DSA transparency report.”
Some TikTok users may have skipped reviewing an update to TikTok’s terms of service this summer that shakes up the process for filing a legal dispute against the app. According to The New York Times, changes that TikTok “quietly” made to its terms suggest that the popular app has spent the back half of 2023 preparing for a wave of legal battles.
In July, TikTok overhauled its rules for dispute resolution, pivoting from requiring private arbitration to insisting that legal complaints be filed in either the US District Court for the Central District of California or the Superior Court of the State of California, County of Los Angeles. Legal experts told the Times this could be a way for TikTok to dodge arbitration claims filed en masse that can cost companies millions more in fees than they expected to pay through individual arbitration.
Perhaps most significantly, TikTok also added a section to its terms that mandates that all legal complaints be filed within one year of any alleged harm caused by using the app. The terms now say that TikTok users “forever waive” rights to pursue any older claims. And unlike a prior version of TikTok’s terms of service archived in May 2023, users do not seem to have any options to opt out of waiving their rights.
TikTok did not immediately respond to Ars’ request to comment, but has previously defended its “industry-leading safeguards for young people,” the Times noted.
Lawyers told the Times that these changes could make it more challenging for TikTok users to pursue legal action at a time when federal agencies are heavily scrutinizing the app and complaints about certain TikTok features allegedly harming kids are mounting.
In the past few years, TikTok has had mixed success defending against user lawsuits filed in courts. In 2021, TikTok was dealt a $92 million blow after settling a class-action lawsuit filed in an Illinois court, which alleged that the app illegally collected underage TikTok users’ personal data. Then, in 2022, TikTok defeated a Pennsylvania lawsuit alleging that the app was liable for a child’s death because its algorithm promoted a deadly “Blackout Challenge.” The same year, a bipartisan coalition of 44 state attorneys general announced an investigation to determine whether TikTok violated consumer laws by allegedly putting young users at risk.
Section 230 shielded TikTok from liability in the 2022 “Blackout Challenge” lawsuit, but more recently, a California judge ruled last month that social media platforms—including TikTok, Facebook, Instagram, and YouTube—couldn’t use a blanket Section 230 defense in a child safety case involving hundreds of children and teens allegedly harmed by social media use across 30 states.
Some of the product liability claims raised in that case are tied to features not protected by Section 230 immunity, the judge wrote, opening up social media platforms to potentially more lawsuits focused on those features. And the Times reported that investigations like the one launched by the bipartisan coalition “can lead to government and consumer lawsuits.”
As new information becomes available to consumers through investigations and lawsuits, there are concerns that users may become aware of harms that occurred before TikTok’s one-year window to file complaints and have no path to seek remedies.
However, it’s currently unclear if TikTok’s new terms will stand up against legal challenges. University of Chicago law professor Omri Ben-Shahar told the Times that TikTok might struggle to defend its new terms in court, and it looks like TikTok is already facing pushback. One lawyer representing more than 1,000 guardians and minors claiming TikTok-related harms, Kyle Roche, told the Times that he is challenging TikTok’s updated terms. Roche said that the minors he represents “could not agree to the changes” and intended to ignore the updates, instead bringing their claims through private arbitration.
TikTok has also spent the past year defending against attempts by lawmakers to ban the China-based app in the US over concerns that the Chinese Communist Party (CCP) may use the app to surveil Americans. Congress has weighed different bipartisan bills with names like “ANTI-SOCIAL CCP Act” and “RESTRICT Act,” each intended to lay out a legal path to ban TikTok nationwide over alleged national security concerns.
So far, TikTok has defeated every attempt to widely ban the app, but that doesn’t mean lawmakers have any plans to stop trying. Most recently, a federal judge stopped Montana’s effort to ban TikTok statewide from taking effect, but a more limited TikTok ban restricting access on state-owned devices was upheld in Texas, Reuters reported.