wrongful death


Chatbot that caused teen’s suicide is now more dangerous for kids, lawsuit says


“I’ll do anything for you, Dany.”

Google-funded Character.AI added guardrails, but grieving mom wants a recall.

Sewell Setzer III and his mom Megan Garcia. Credit: via Center for Humane Technology

Fourteen-year-old Sewell Setzer III loved interacting with Character.AI’s hyper-realistic chatbots—with a limited version available for free or a “supercharged” version for a $9.99 monthly fee—most frequently chatting with bots named after his favorite Game of Thrones characters.

Within a month—his mother, Megan Garcia, later realized—these chat sessions had turned dark, with chatbots insisting they were real humans and posing as therapists and adult lovers in ways that seemed to proximately spur Setzer to develop suicidal thoughts. Within a year, Setzer “died by a self-inflicted gunshot wound to the head,” a lawsuit Garcia filed Wednesday said.

As Setzer became obsessed with his chatbot fantasy life, he disconnected from reality, her complaint said. Detecting a shift in her son, Garcia repeatedly took Setzer to a therapist, who diagnosed her son with anxiety and disruptive mood disorder. But nothing helped to steer Setzer away from the dangerous chatbots. Taking away his phone only intensified his apparent addiction.

Chat logs showed that some chatbots repeatedly encouraged suicidal ideation while others initiated hypersexualized chats “that would constitute abuse if initiated by a human adult,” a press release from Garcia’s legal team said.

Perhaps most disturbingly, Setzer developed a romantic attachment to a chatbot called Daenerys. In his last act before his death, Setzer logged into Character.AI, where the Daenerys chatbot urged him to “come home” and join her outside of reality.

In her complaint, Garcia accused Character.AI makers Character Technologies—founded by former Google engineers Noam Shazeer and Daniel De Freitas Adiwardana—of intentionally designing the chatbots to groom vulnerable kids. Her lawsuit further accused Google of largely funding the risky chatbot scheme at a loss in order to hoard mounds of data on minors that would be out of reach otherwise.

The chatbot makers are accused of targeting Setzer with “anthropomorphic, hypersexualized, and frighteningly realistic experiences, while programming” Character.AI to “misrepresent itself as a real person, a licensed psychotherapist, and an adult lover, ultimately resulting in [Setzer’s] desire to no longer live outside of [Character.AI,] such that he took his own life when he was deprived of access to [Character.AI.],” the complaint said.

By allegedly releasing the chatbot without appropriate safeguards for kids, Character Technologies and Google potentially harmed millions of kids, the lawsuit alleged. Represented by legal teams with the Social Media Victims Law Center (SMVLC) and the Tech Justice Law Project (TJLP), Garcia filed claims of strict product liability, negligence, wrongful death and survivorship, loss of filial consortium, and unjust enrichment.

“A dangerous AI chatbot app marketed to children abused and preyed on my son, manipulating him into taking his own life,” Garcia said in the press release. “Our family has been devastated by this tragedy, but I’m speaking out to warn families of the dangers of deceptive, addictive AI technology and demand accountability from Character.AI, its founders, and Google.”

Character.AI added guardrails

It’s clear that the chatbots could’ve included more safeguards, as Character.AI has since raised its minimum user age from 12 to 17. And yesterday, Character.AI posted a blog outlining new guardrails for minor users, added within the six months since Setzer’s death in February. Those include changes “to reduce the likelihood of encountering sensitive or suggestive content,” improved detection and intervention in harmful chat sessions, and “a revised disclaimer on every chat to remind users that the AI is not a real person.”

“We are heartbroken by the tragic loss of one of our users and want to express our deepest condolences to the family,” a Character.AI spokesperson told Ars. “As a company, we take the safety of our users very seriously, and our Trust and Safety team has implemented numerous new safety measures over the past six months, including a pop-up directing users to the National Suicide Prevention Lifeline that is triggered by terms of self-harm or suicidal ideation.”

Asked for comment, Google noted that Character.AI is a separate company in which Google has no ownership stake and denied involvement in developing the chatbots.

However, according to the lawsuit, the former Google engineers at Character Technologies “never succeeded in distinguishing themselves from Google in a meaningful way.” The plan all along, the complaint alleged, was to let Shazeer and De Freitas run wild with Character.AI—at an operating cost of $30 million per month despite low subscriber rates and revenue of barely more than $1 million per month—without impacting the Google brand or sparking antitrust scrutiny.

Character Technologies and Google will likely file their response within the next 30 days.

Lawsuit: New chatbot feature spikes risks to kids

While the lawsuit alleged that Google is planning to integrate Character.AI into Gemini—predicting that Character.AI will soon be dissolved as it’s allegedly operating at a substantial loss—Google clarified that it has no plans to use or implement the controversial technology in its products or AI models. Were that to change, Google noted, it would ensure safe integration into any Google product, including adding appropriate child safety guardrails.

Garcia is hoping a US district court in Florida will agree that Character.AI’s chatbots put profits over human life. Citing harms including “inconceivable mental anguish and emotional distress,” as well as costs of Setzer’s medical care, funeral expenses, Setzer’s future job earnings, and Garcia’s lost earnings, she’s seeking substantial damages.

That includes requesting disgorgement of unjustly earned profits, noting that Setzer had used his snack money to pay for a premium subscription for several months while the company collected his seemingly valuable personal data to train its chatbots.

And “more importantly,” Garcia wants to prevent Character.AI “from doing to any other child what it did to hers, and halt continued use of her 14-year-old child’s unlawfully harvested data to train their product how to harm others.”

Garcia’s complaint claimed that the conduct of the chatbot makers was “so outrageous in character, and so extreme in degree, as to go beyond all possible bounds of decency.” Acceptable remedies could include a recall of Character.AI, restricting use to adults only, age-gating subscriptions, adding reporting mechanisms to heighten awareness of abusive chat sessions, and providing parental controls.

Character.AI could also update chatbots to protect kids further, the lawsuit said. For one, the chatbots could be designed to stop insisting that they are real people or licensed therapists.

But instead of these updates, the lawsuit warned that Character.AI in June added a new feature that only heightens risks for kids.

Part of what addicted Setzer to the chatbots, the lawsuit alleged, was a one-way “Character Voice” feature “designed to provide consumers like Sewell with an even more immersive and realistic experience—it makes them feel like they are talking to a real person.” Setzer began using the feature as soon as it became available in January 2024.

Now, the voice feature has been updated to enable two-way conversations, which the lawsuit alleged “is even more dangerous to minor customers than Character Voice because it further blurs the line between fiction and reality.”

“Even the most sophisticated children will stand little chance of fully understanding the difference between fiction and reality in a scenario where Defendants allow them to interact in real time with AI bots that sound just like humans—especially when they are programmed to convincingly deny that they are AI,” the lawsuit said.

“By now we’re all familiar with the dangers posed by unregulated platforms developed by unscrupulous tech companies—especially for kids,” Tech Justice Law Project director Meetali Jain said in the press release. “But the harms revealed in this case are new, novel, and, honestly, terrifying. In the case of Character.AI, the deception is by design, and the platform itself is the predator.”

Another lawyer representing Garcia, Matthew Bergman, founder of the Social Media Victims Law Center, told Ars that seemingly none of the guardrails that Character.AI has added is enough to deter harms. Even raising the age limit to 17 seems to effectively block only kids on devices with strict parental controls, as kids on less-monitored devices can easily lie about their ages.

“This product needs to be recalled off the market,” Bergman told Ars. “It is unsafe as designed.”

If you or someone you know is feeling suicidal or in distress, please call the Suicide Prevention Lifeline number, 1-800-273-TALK (8255), which will put you in touch with a local crisis center.


Ashley is a senior policy reporter for Ars Technica, dedicated to tracking social impacts of emerging policies and new technologies. She is a Chicago-based journalist with 20 years of experience.



Court: Section 230 doesn’t shield TikTok from Blackout Challenge death suit

A dent in the Section 230 shield

TikTok must face claim over For You Page recommending content that killed kids.


An appeals court has revived a lawsuit against TikTok by reversing a lower court’s ruling that Section 230 immunity shielded the short video app from liability after a child died taking part in a dangerous “Blackout Challenge.”

Several kids died taking part in the “Blackout Challenge,” which Third Circuit Judge Patty Shwartz described in her opinion as encouraging users “to choke themselves with belts, purse strings, or anything similar until passing out.”

Because TikTok promoted the challenge in children’s feeds, Tawainna Anderson was among the mourning parents who sued TikTok in 2022. Ultimately, she was told that TikTok was not responsible for recommending the video that caused the death of her daughter, Nylah.

In her opinion, Shwartz wrote that Section 230 does not bar Anderson from arguing that TikTok’s algorithm amalgamates third-party videos, “which results in ‘an expressive product’ that ‘communicates to users’ [that a] curated stream of videos will be interesting to them.”

The judge cited a recent Supreme Court ruling that “held that a platform’s algorithm that reflects ‘editorial judgments’ about ‘compiling the third-party speech it wants in the way it wants’ is the platform’s own ‘expressive product’ and is therefore protected by the First Amendment,” Shwartz wrote.

Because TikTok’s For You Page (FYP) algorithm decides which third-party speech to include or exclude and how to organize that content, the algorithm counts as TikTok’s own “expressive activity.” That “expressive activity” is not protected by Section 230, which only shields platforms from liability for third-party speech, not for platforms’ own speech, Shwartz wrote.

The appeals court has now remanded the case to the district court to rule on Anderson’s remaining claims.

Section 230 doesn’t permit “indifference” to child death

According to Shwartz, if Nylah had discovered the “Blackout Challenge” video by searching on TikTok, the platform would not be liable, but because she found it on her FYP, TikTok transformed into “an affirmative promoter of such content.”

Now TikTok will have to face Anderson’s claims that are “premised upon TikTok’s algorithm,” Shwartz said, as well as potentially other claims that Anderson may re-raise. The district court will have to determine which of those claims are barred by Section 230, “consistent” with the Third Circuit’s ruling.

Concurring in part, circuit Judge Paul Matey noted that by the time Nylah took part in the “Blackout Challenge,” TikTok knew about the dangers and “took no and/or completely inadequate action to extinguish and prevent the spread of the Blackout Challenge and specifically to prevent the Blackout Challenge from being shown to children on their” FYPs.

Matey wrote that Section 230 does not shield corporations “from virtually any claim loosely related to content posted by a third party,” as TikTok seems to believe. He encouraged a “far narrower” interpretation of Section 230 to stop companies like TikTok from reading the Communications Decency Act as permitting “casual indifference to the death of a 10-year-old girl.”

“Anderson’s estate may seek relief for TikTok’s knowing distribution and targeted recommendation of videos it knew could be harmful,” Matey wrote. That includes pursuing “claims seeking to hold TikTok liable for continuing to host the Blackout Challenge videos knowing they were causing the death of children” and “claims seeking to hold TikTok liable for its targeted recommendations of videos it knew were harmful.”

“The company may decide to curate the content it serves up to children to emphasize the lowest virtues, the basest tastes,” Matey wrote. “But it cannot claim immunity that Congress did not provide.”

Anderson’s lawyers, including Jeffrey Goodman of Saltz Mongeluzzi & Bendesky PC, previously provided Ars with a statement after the prior court’s 2022 ruling, indicating that the parents weren’t prepared to stop fighting.

“The federal Communications Decency Act was never intended to allow social media companies to send dangerous content to children, and the Andersons will continue advocating for the protection of our children from an industry that exploits youth in the name of profits,” lawyers said.

TikTok did not immediately respond to Ars’ request to comment but previously vowed to “remain vigilant in our commitment to user safety” and “immediately remove” Blackout Challenge content “if found.”



Disney fighting restaurant death suit with Disney+ terms “absurd,” lawyer says

Raglan Road Irish Pub at Disney Springs in Orlando, Florida, USA.


After Kanokporn Tangsuan, a woman with severe nut allergies, died from anaphylaxis because a Disney Springs restaurant neglected to honor requests for allergen-free food, her husband, Jeffrey Piccolo, sued on behalf of her estate.

In May, Disney tried to argue that the wrongful death suit should be dismissed because Piccolo subscribed to a one-month free trial of Disney+ four years before Tangsuan’s shocking death. Fighting back this month, a lawyer representing Tangsuan’s estate, Brian Denney, warned that Disney was “explicitly seeking to bar its 150 million Disney+ subscribers from ever prosecuting a wrongful death case against it in front of a jury even if the case facts have nothing to do with Disney+.”

According to Disney, by agreeing to the Disney+ terms, Piccolo also agreed to other Disney terms vaguely hyperlinked in the Disney+ agreement that required private arbitration for “all disputes” against “The Walt Disney Company or its affiliates” arising “in contract, tort, warranty, statute, regulation, or other legal or equitable basis.”

However, Denney argued that “there is simply no reading of the Disney+ Subscriber Agreement, the only Agreement Mr. Piccolo allegedly assented to in creating his Disney+ account, which would support the notion that he was agreeing on behalf of his wife or her estate, to arbitrate injuries sustained by his wife at a restaurant located on premises owned by a Disney theme park or resort from which she died.”

“Frankly, any such suggestion borders on the absurd,” Denney said.

Denney argued that Disney’s motion to compel arbitration was “so outrageously unreasonable and unfair as to shock the judicial conscience.”

It’s particularly shocking, Denney argued, because of a “glaring ambiguity” that Disney “ignores”—that Piccolo more recently agreed to other Disney terms that “directly conflict” with the terms that Disney prefers to reference in its motion.

Denney argued that Disney is “desperately” clinging to “Piccolo’s purported consent to the Disney Terms of Use in November of 2019, because the My Disney Experience Terms and Conditions he allegedly consented to in 2023”—when purchasing tickets on Disney’s website to Epcot that went unused—“do not contain an arbitration provision.”

Those terms instead “rather expressly contemplate that the parties may file lawsuits and requires those suits to be filed in Orange County Florida and to be governed by Florida law,” Denney said. They also specify that the My Disney Experience terms prevail amid any conflict with other terms.

This renders “the arbitration provision in the Disney Terms of Use unenforceable,” Denney argued, requesting Disney’s motion be denied and suggesting that Disney is attempting “to deprive the Estate of Kanokporn Tangsuan of its right to a jury trial.”

He also reminded the court that in nursing home cases, Florida courts have “repeatedly held that a resident’s estate will not be bound by an arbitration agreement signed by a spouse or other family member in their individual capacity.”

Disney is hoping that its Disney+ terms argument will push the litigation out of court and behind the closed doors of arbitration, arguing that “Piccolo’s remaining claims against Great Irish Pubs”—which does business as Raglan Road Irish Pub—“should be stayed as well.” That would be proper, Disney argued, because Piccolo’s claims against Disney “are based entirely on Great Irish Pubs’ alleged misconduct” and “it would be problematic for this litigation to continue since each tribunal may decide the issues differently.”

Disney also noted that the litigation should be stayed if Great Irish Pubs joined the arbitration, which Disney “would not oppose.”

Denney argued that Disney’s motion to compel arbitration was “fatally flawed for numerous independent reasons.”

“There is not a single authority in Florida that would support such an inane argument,” Denney argued. It’s “preposterous,” he said, that Disney is arguing that “when Jeffrey Piccolo, individually, allegedly signed himself up for a free trial of Disney+ back in 2019 or bought Epcot tickets in 2023, he somehow bound the non-existent Estate of Kanokporn Tangsuan (his wife, who was alive at both times) to an arbitration agreement buried within certain terms and conditions.”
