Policy

Cards Against Humanity lawsuit forced SpaceX to vacate land on US/Mexico border

A year after suing SpaceX for “invading” a plot of land on the US/Mexico border, Cards Against Humanity says it has reached a settlement and will provide supporters with a new pack of cards about Elon Musk.

The party-game company bought the land in 2017 in an attempt to stymie President Trump’s wall-building project and later alleged that SpaceX illegally took over the land and filled it with construction equipment and materials. A September 2024 lawsuit filed against SpaceX in Cameron County District Court in Texas sought up to $15 million to cover the cost of restoring the property and other damages.

Cards Against Humanity, which bought the property with donations from supporters, told Ars today that “we’ve been in negotiations with SpaceX for much of the last year. We held out for the best settlement we could get—almost until the trial was supposed to start—and unfortunately part of that negotiation was that we’re not allowed to discuss specific settlement terms. They did admit to trespassing during the discovery phase, which was very validating.”

A court document shows that SpaceX admitted it did not ask for or receive permission to use the property. SpaceX admitted that its “contractors cleared the lot and put down gravel,” parked vehicles on the property, and stored construction materials. An Associated Press article yesterday said that “Texas court records show a settlement was reached in the case last month, just weeks before a jury trial was scheduled to begin on Nov. 3.”

The game company said a victory at trial wouldn’t have resulted in a better outcome. “A trial would have cost more than what we were likely to win from SpaceX,” the company’s statement to Ars said. “Under Texas law, even if we had won at trial (and we would have, given their admission to trespassing), we likely wouldn’t have been able to recoup our legal fees. And SpaceX certainly seemed ready to dramatically outspend us on lawyers.”

“They packed up the space garbage”

The company also provided this update to donors:

Dear Horrible Friends,

Remember last year, when we sued Elon Musk for dumping space garbage all over your land, and then you signed up to collect your share of the proceeds? Also, remember how we warned you that we’d “probably only be able to get you like two dollars or most likely nothing”?

Well, Elon Musk’s team admitted on the record that they illegally trespassed on your land, and then they packed up the space garbage and fucked off. But when it comes to paying you all, he did the legal equivalent of throwing dust in our eyes and kicking us in the balls.

Instead of money, Cards Against Humanity said it will provide its “best, sexiest customers” with a comedic “mini-pack of exclusive cards all about Elon Musk” that can be obtained via this sign-up link. “P.S. Soon, the land will be returned to its natural state: no space garbage, and still completely free of pointless fucking border walls,” the company said.

Musk’s $1 trillion Tesla pay plan draws some protest ahead of likely approval

Ann Lipton, a University of Colorado Law School professor, told the Financial Times that she expects shareholders to approve the latest pay package despite the ISS recommendation. “They recommended against it before and the shareholders voted in favor, and this time Elon Musk gets to vote… and his brother gets to vote,” she said. “That wasn’t true last time. I strongly expect that all of these proposals are going to go Tesla’s way.”

Pay plan goals are vaguely defined, letter says

The Musk pay plan was also opposed in a letter signed by the American Federation of Teachers; state treasurers from Nevada, Massachusetts, and New Mexico; and comptrollers from New York City and Maryland.

“We believe the Board’s failure to ensure CEO Musk devotes full attention to Tesla, while making him the highest-paid CEO in history, shows how beholden it is to management,” the letter said. “The Board has permitted Mr. Musk to be over-committed for years, allowing him to continue as CEO while taking time-consuming leadership roles at his other companies, xAI/X, SpaceX, Neuralink, and Boring Company.”

The letter said the pay plan’s vehicle-delivery goal could be reached even if annual sales decrease and that the Full Self-Driving subscription goal is “carefully worded to not actually require that the service ever achieves full unsupervised self-driving.”

The letter said the goal of delivering 1 million AI robots or “bots” is so vague that “even if Tesla fails to develop a commercially successful robot, it could market devices developed and manufactured by other firms and still achieve this milestone.” The robotaxi goal similarly “does not require that Tesla has designed and developed the robotaxis in question, nor that their operation be profitable,” the letter said.

The letter faulted the board for letting Musk take “a leadership position at the US Department of Government Efficiency (DOGE), a role widely seen as having a negative impact on the Company’s performance and brand… In our view, the Board’s failure to limit Mr. Musk’s outside endeavors while rewarding him with unprecedented pay packages for only a part-time commitment strongly indicates a lack of true independence by management and jeopardizes long-term shareholder value.”

Big Tech sues Texas, says age-verification law is “broad censorship regime”

Texas minors also challenge law

The Texas App Store Accountability Act is similar to laws enacted by Utah and Louisiana. The Texas law is scheduled to take effect on January 1, 2026, while the Utah and Louisiana laws are set to be enforced starting in May and July, respectively.

The Texas law is also being challenged in a different lawsuit filed by a student advocacy group and two Texas minors.

“The First Amendment does not permit the government to require teenagers to get their parents’ permission before accessing information, except in discrete categories like obscenity,” attorney Ambika Kumar of Davis Wright Tremaine LLP said in an announcement of the lawsuit. “The Constitution also forbids restricting adults’ access to speech in the name of protecting children. This law imposes a system of prior restraint on protected expression that is presumptively unconstitutional.”

Davis Wright Tremaine LLP said the law “extends far beyond social media to mainstream educational, news, and creative applications, including Wikipedia, search apps, and internet browsers; messaging services like WhatsApp and Slack; content libraries like Audible, Kindle, Netflix, Spotify, and YouTube; educational platforms like Coursera, Codecademy, and Duolingo; news apps from The New York Times, The Wall Street Journal, ESPN, and The Atlantic; and publishing tools like Substack, Medium, and CapCut.”

Both lawsuits against Texas argue that the law is foreclosed by the Supreme Court’s 2011 decision in Brown v. Entertainment Merchants Association, which struck down a California law restricting the sale of violent video games to children. The Supreme Court said in Brown that a state’s power to protect children from harm “does not include a free-floating power to restrict the ideas to which children may be exposed.”

The tech industry has sued Texas over multiple laws related to content moderation. In 2022, the Supreme Court blocked a Texas law that prohibits large social media companies from moderating posts based on a user’s viewpoint. Litigation in that case is ongoing. In a separate case decided in June 2025, the Supreme Court upheld a Texas law that requires age verification on porn sites.

Teen sues to destroy the nudify app that left her in constant fear

A spokesperson told The Wall Street Journal that “nonconsensual pornography and the tools to create it are explicitly forbidden by Telegram’s terms of service and are removed whenever discovered.”

For the teen suing, the prime target remains ClothOff itself. Her lawyers think it’s possible that she can get the app and its affiliated sites blocked in the US, the WSJ reported, if ClothOff fails to respond and the court awards her default judgment.

But no matter the outcome of the litigation, the teen expects to be forever “haunted” by the fake nudes that a high school boy generated without facing any charges.

According to the WSJ, the teen girl sued the boy who she said made her want to drop out of school. Her complaint noted that she was informed that “the individuals responsible and other potential witnesses failed to cooperate with, speak to, or provide access to their electronic devices to law enforcement.”

The teen has felt “mortified and emotionally distraught, and she has experienced lasting consequences ever since,” her complaint said. She has no idea if ClothOff can continue to distribute the harmful images, and she has no clue how many teens may have posted them online. Because of these unknowns, she’s certain she’ll spend “the remainder of her life” monitoring “for the resurfacing of these images.”

“Knowing that the CSAM images of her will almost inevitably make their way onto the Internet and be retransmitted to others, such as pedophiles and traffickers, has produced a sense of hopelessness” and “a perpetual fear that her images can reappear at any time and be viewed by countless others, possibly even friends, family members, future partners, colleges, and employers, or the public at large,” her complaint said.

The teen’s lawsuit is the newest front in a wider attempt to crack down on AI-generated CSAM and NCII. It follows prior litigation filed by San Francisco City Attorney David Chiu last year that targeted ClothOff, among 16 popular apps used to “nudify” photos of mostly women and young girls.

About 45 states have criminalized fake nudes, the WSJ reported, and earlier this year, Donald Trump signed the Take It Down Act into law, which requires platforms to remove both real and AI-generated NCII within 48 hours of victims’ reports.

Sony tells SCOTUS that people accused of piracy aren’t “innocent grandmothers”

Record labels Sony, Warner, and Universal yesterday asked the Supreme Court to help them boot pirates off the Internet.

Sony and the other labels filed their brief in Cox Communications v. Sony Music Entertainment, a case involving the cable Internet service provider that rebuffed labels’ demands for mass terminations of broadband subscribers accused of repeat copyright infringement. The Supreme Court’s eventual decision in the case may determine whether Internet service providers must terminate the accounts of alleged pirates in order to avoid massive financial liability.

Cox has argued that copyright-infringement notices—which are generated by bots and flag users based on their IP addresses—sent by record labels are unreliable. Cox said ISPs can’t verify whether the notices are accurate and that terminating an account would punish every user in a household where only one person may have illegally downloaded copyrighted files.
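
Cox’s argument is easier to see with a toy model. The sketch below is purely illustrative, with invented data and field names rather than any ISP’s actual system; it shows that a notice’s IP address and timestamp resolve, at best, to the subscriber account that held that address at the time, never to the person who did the downloading.

    from datetime import datetime

    # Hypothetical DHCP lease records: which subscriber account held which
    # public IP address during which window. Invented for illustration.
    LEASES = [
        {"ip": "203.0.113.7", "account": "ACCT-1041",
         "start": datetime(2014, 3, 1), "end": datetime(2014, 3, 15)},
        {"ip": "203.0.113.7", "account": "ACCT-2208",
         "start": datetime(2014, 3, 15), "end": datetime(2014, 4, 1)},
    ]

    def account_for_notice(ip, seen_at):
        # Resolve a bot-generated infringement notice to a subscriber
        # account. The (ip, timestamp) pair identifies a household at most;
        # nothing in the notice says which family member or Wi-Fi guest
        # shared the file, and the ISP cannot verify the claim itself.
        for lease in LEASES:
            if lease["ip"] == ip and lease["start"] <= seen_at < lease["end"]:
                return lease["account"]
        return None  # a stale or mistimed notice matches no account at all

    print(account_for_notice("203.0.113.7", datetime(2014, 3, 20)))  # ACCT-2208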

Record labels urged the Supreme Court to reject this argument.

“While Cox waxes poetic about the centrality of Internet access to modern life, it neglects to mention that it had no qualms about terminating 619,711 subscribers for nonpayment over the same period that it terminated just 32 for serial copyright abuse,” the labels’ brief said. “And while Cox stokes fears of innocent grandmothers and hospitals being tossed off the Internet for someone else’s infringement, Cox put on zero evidence that any subscriber here fit that bill. By its own admission, the subscribers here were ‘habitual offenders’ Cox chose to retain because, unlike the vast multitude cut off for late payment, they contributed to Cox’s bottom line.”

Record labels were referring to a portion of Cox’s brief that said, “Grandma will be thrown off the Internet because Junior illegally downloaded a few songs on a visit.”

Too much torrenting, record labels say

The record labels’ brief complained about torrents providing Internet users with easy access to pirated material:

Today, most infringement occurs on peer-to-peer file-sharing protocols like BitTorrent, which enable viral uploading and downloading of pirated music faster than ever, leaving behind no fingerprints beyond a ten-digit IP address that only an ISP can tie to the user. Unlike earlier methods of infringement—e.g., bootleg CD manufacturers—peer-to-peer file-sharing protocols lack a central hub that law enforcement can shutter. And unlike earlier methods, peer-to-peer protocols are not constrained by the need to create physical copies; BitTorrent enables tens of thousands of people to trade pirated music simultaneously.

Record labels said that Cox used a “13-strike policy” that let subscribers repeatedly download pirated material before facing any consequences. The case is based on “claims to works infringed by subscribers who generated at least three infringement notices across 2013 and 2014, for a total of 10,017 copyrighted works,” the brief said.
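
The brief doesn’t spell out each step of that policy, but a graduated strike counter is easy to picture. Here is a minimal sketch with invented thresholds and actions, not Cox’s actual procedures:

    from collections import defaultdict

    # Invented escalation ladder; the brief does not detail the real
    # steps of the "13-strike policy."
    ACTIONS = {4: "warning email", 8: "temporary suspension", 13: "termination review"}

    strikes = defaultdict(int)

    def record_notice(account):
        # Count one infringement notice against an account and return
        # whatever action that strike count triggers, if any.
        strikes[account] += 1
        return ACTIONS.get(strikes[account], "no action")

    for _ in range(13):
        action = record_notice("ACCT-1041")
    print(action)  # "termination review" only on the 13th notice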

OpenAI thinks Elon Musk funded its biggest critics—who also hate Musk

“We are not in any way supported by or funded by Elon Musk and have a history of campaigning against him and his interests,” Ruby-Sachs told NBC News.

Another nonprofit watchdog targeted by OpenAI was The Midas Project, which strives to make sure AI benefits everyone. Notably, Musk’s lawsuit accused OpenAI of abandoning its mission to benefit humanity in pursuit of immense profits.

But the founder of The Midas Project, Tyler Johnston, was shocked to see his group portrayed as coordinating with Musk. He posted on X to clarify that Musk had nothing to do with the group’s “OpenAI Files,” which comprehensively document areas of concern with any plan to shift away from nonprofit governance.

His post came after OpenAI’s chief strategy officer, Jason Kwon, wrote that “several organizations, some of them suddenly newly formed like the Midas Project, joined in and ran campaigns” backing Musk’s “opposition to OpenAI’s restructure.”

“What are you talking about?” Johnston wrote. “We were formed 19 months ago. We’ve never spoken with or taken funding from Musk and [his] ilk, which we would have been happy to tell you if you asked a single time. In fact, we’ve said he runs xAI so horridly it makes OpenAI ‘saintly in comparison.’”

OpenAI acting like a “cutthroat” corporation?

Johnston complained that OpenAI’s subpoena had already hurt the Midas Project, as insurers had denied the group coverage based on news reports. He accused OpenAI of not just trying to silence critics but of possibly trying to shut them down.

“If you wanted to constrain an org’s speech, intimidation would be one strategy, but making them uninsurable is another, and maybe that’s what’s happened to us with this subpoena,” Johnston suggested.

Other nonprofits, like the San Francisco Foundation (SFF) and Encode, accused OpenAI of using subpoenas to potentially block or slow down legal interventions. Judith Bell, SFF’s chief impact officer, told NBC News that her nonprofit’s subpoena came after spearheading a petition to California’s attorney general to block OpenAI’s restructuring. And Encode’s general counsel, Nathan Calvin, was subpoenaed after sponsoring a California safety regulation meant to make it easier to monitor risks of frontier AI.

ISPs angry about California law that lets renters opt out of forced payments

Rejecting opposition from the cable and real estate industries, California Gov. Gavin Newsom signed a bill that aims to increase broadband competition in apartment buildings.

The new law taking effect on January 1 says landlords must let tenants “opt out of paying for any subscription from a third-party Internet service provider, such as through a bulk-billing arrangement, to provide service for wired Internet, cellular, or satellite service that is offered in connection with the tenancy.” It was approved by the state Assembly in a 75–0 vote in April, and by the Senate in a 30–7 vote last month.

“This is kind of like a first step in trying to give this industry an opportunity to just treat people fairly,” Assemblymember Rhodesia Ransom, a Democratic lawmaker who authored the bill, told Ars last month. “It’s not super restrictive. We are not banning bulk billing. We’re not even limiting how much money the people can make. What we’re saying here with this bill is that if a tenant wants to opt out of the arrangement, they should be allowed to opt out.”

Ransom said lobby groups for Internet providers and real estate companies were “working really hard” to defeat the bill. The California Broadband & Video Association, which represents cable companies, called it “an anti-affordability bill masked as consumer protection.”

Complaining that property owners would have “to provide a refund to tenants who decline the Internet service provided through the building’s contract with a specific Internet service provider,” the cable group said the law “undermines the basis of the cost savings and will lead to bulk billing being phased out.”

State law fills gap in federal rules

Ransom argued that the bill would boost competition and said that “some of our support came from some of the smaller Internet service providers.”

Feds seize $15 billion from alleged forced labor scam built on “human suffering”

Federal prosecutors have seized $15 billion from the alleged kingpin of an operation that used imprisoned laborers to trick unsuspecting people into making investments in phony funds, often after spending months faking romantic relationships with the victims.

Such “pig butchering” scams have operated for years. They typically begin when members of the operation initiate conversations with people on social media and then spend months messaging them. Often, the scammers pose as attractive individuals who feign romantic interest in the victim.

Forced labor, phone farms, and human suffering

Eventually, conversations turn to phony investment funds with the end goal of convincing the victim to transfer large amounts of bitcoin. In many cases, the scammers are trafficked and held against their will in compounds surrounded by fences and barbed wire.

On Tuesday, federal prosecutors unsealed an indictment against Chen Zhi, the founder and chairman of a multinational business conglomerate based in Cambodia. It alleged that Zhi led such a forced-labor scam operation, which, with the help of unnamed co-conspirators, netted billions of dollars from victims.

“The defendant CHEN ZHI and his co-conspirators designed the compounds to maximize profits and personally ensured that they had the necessary infrastructure to reach as many victims as possible,” prosecutors wrote in the court document, filed in US District Court for the Eastern District of New York. The indictment continued:

For example, in or about 2018, Co-Conspirator-1 was involved in procuring millions of mobile telephone numbers and account passwords from an illicit online marketplace. In or about 2019, Co-Conspirator-3 helped oversee construction of the Golden Fortune compound. CHEN himself maintained documents describing and depicting “phone farms,” automated call centers used to facilitate cryptocurrency investment fraud and other cybercrimes, including the below image:

[Image of a “phone farm” included in the indictment. Credit: Justice Department]

Prosecutors said Zhi is the founder and chairman of Prince Group, a Cambodian corporate conglomerate that ostensibly operated dozens of legitimate business entities in more than 30 countries. In secret, however, Zhi and top executives built Prince Group into one of Asia’s largest transnational criminal organizations. Zhi’s whereabouts are unknown.

Trump admin pressured Facebook into removing ICE-tracking group

Attorney General Pam Bondi today said that Facebook removed an ICE-tracking group after “outreach” from the Department of Justice. “Today following outreach from @thejusticedept, Facebook removed a large group page that was being used to dox and target @ICEgov agents in Chicago,” Bondi wrote in an X post.

Bondi alleged that a “wave of violence against ICE has been driven by online apps and social media campaigns designed to put ICE officers at risk just for doing their jobs.” She added that the DOJ “will continue engaging tech companies to eliminate platforms where radicals can incite imminent violence against federal law enforcement.”

When contacted by Ars, Facebook owner Meta said the group “was removed for violating our policies against coordinated harm.” Meta didn’t describe any specific violation but directed us to a policy against “coordinating harm and promoting crime,” which includes a prohibition against “outing the undercover status of law enforcement, military, or security personnel.”

The statement was sent by Francis Brennan, a former Trump campaign advisor who was hired by Meta in January.

The White House recently claimed there has been “a more than 1,000 percent increase in attacks on U.S. Immigration and Customs Enforcement (ICE) officers since January 21, 2025, compared to the same period last year.” Government officials haven’t offered proof of this claim, according to an NPR report that said “there is no public evidence that [attacks] have spiked as dramatically as the federal government has claimed.”

The Justice Department contacted Meta after Laura Loomer sought action against the “ICE Sighting-Chicagoland” group that had over 84,000 members on Facebook. “Fantastic news. DOJ source tells me they have seen my report and they have contacted Facebook and their executives at META to tell them they need to remove these ICE tracking pages from the platform,” Loomer wrote yesterday.

The ICE Sighting-Chicagoland group “has been increasingly used over the last five weeks of ‘Operation Midway Blitz,’ President Donald Trump’s intense deportation campaign, to warn neighbors that federal agents are near schools, grocery stores and other community staples so they can take steps to protect themselves,” the Chicago Sun-Times wrote today.

Trump slammed Biden for social media “censorship”

Trump and Republicans repeatedly criticized the Biden administration for pressuring social media companies into removing content. In a day-one executive order declaring an end to “federal censorship,” Trump said “the previous administration trampled free speech rights by censoring Americans’ speech on online platforms, often by exerting substantial coercive pressure on third parties, such as social media companies, to moderate, deplatform, or otherwise suppress speech that the Federal Government did not approve.”

OpenAI unveils “wellness” council; suicide prevention expert not included

OpenAI reveals which experts are steering ChatGPT mental health upgrades.

Ever since a lawsuit accused ChatGPT of becoming a teen’s “suicide coach,” OpenAI has been scrambling to make its chatbot safer. Today, the AI firm unveiled the experts it hired to help make ChatGPT a healthier option for all users.

In a press release, OpenAI explained that its Expert Council on Wellness and AI began taking shape after the company started informally consulting with experts on parental controls earlier this year. The council has now been formalized, bringing together eight “leading researchers and experts with decades of experience studying how technology affects our emotions, motivation, and mental health” to help steer ChatGPT updates.

One priority was finding “several council members with backgrounds in understanding how to build technology that supports healthy youth development,” OpenAI said, “because teens use ChatGPT differently than adults.”

That effort includes David Bickham, a research director at Boston Children’s Hospital, who has closely monitored how social media impacts kids’ mental health, and Mathilde Cerioli, the chief science officer at a nonprofit called Everyone.AI. Cerioli studies the opportunities and risks of children using AI, particularly focused on “how AI intersects with child cognitive and emotional development.”

These experts can seemingly help OpenAI better understand how safeguards can fail kids during extended conversations and ensure that kids aren’t left particularly vulnerable to so-called “AI psychosis,” a phenomenon in which longer chats appear to trigger mental health issues.

In January, Bickham noted in an American Psychological Association article on AI in education that “little kids learn from characters” already—as they do things like watch Sesame Street—and form “parasocial relationships” with those characters. AI chatbots could be the next frontier, possibly filling in teaching roles if we know more about the way kids bond with chatbots, Bickham suggested.

“How are kids forming a relationship with these AIs, what does that look like, and how might that impact the ability of AIs to teach?” Bickham posited.

Cerioli closely monitors AI’s influence in kids’ worlds. She suggested last month that kids who grow up using AI may risk having their brains rewired to “become unable to handle contradiction,” Le Monde reported, especially “if their earliest social interactions, at an age when their neural circuits are highly malleable, are conducted with endlessly accommodating entities.”

“Children are not mini-adults,” Cerioli said. “Their brains are very different, and the impact of AI is very different.”

Neither expert is focused on suicide prevention in kids. That may disappoint dozens of suicide prevention experts who last month pushed OpenAI to consult with experts deeply familiar with what “decades of research and lived experience” show about “what works in suicide prevention.”

OpenAI experts on suicide risks of chatbots

When asked on a podcast last year about the earliest reported chatbot-linked teen suicide, Cerioli said that child brain development is the area she’s most “passionate” about. She said the news didn’t surprise her and noted that her research focuses less on figuring out “why that happened” and more on why it can happen at all, because kids are “primed” to seek out “human connection.”

She noted that a troubled teen confessing suicidal ideation to a friend in the real world would more likely lead to an adult getting involved, whereas a chatbot would need specific safeguards built in to ensure parents are notified.

This seems in line with the steps OpenAI took to add parental controls, consulting with experts to design “the notification language for parents when a teen may be in distress,” the company’s press release said. However, on a resources page for parents, OpenAI has confirmed that parents won’t always be notified if a teen is linked to real-world resources after expressing “intent to self-harm,” which may alarm some critics who think the parental controls don’t go far enough.

Although OpenAI does not specify this in the press release, it appears that Munmun De Choudhury, a professor of interactive computing at Georgia Tech, could help evolve ChatGPT to recognize when kids are in danger and notify parents.

De Choudhury studies computational approaches to improve “the role of online technologies in shaping and improving mental health,” OpenAI noted.

In 2023, she conducted a study on the benefits and harms of large language models in digital mental health. The study was funded in part through a grant from the American Foundation for Suicide Prevention and noted that chatbots providing therapy services at that point could only detect “suicide behaviors” about half the time. The task appeared “unpredictable” and “random” to scholars, she reported.
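
A figure like “about half the time” is essentially a recall measurement: of the messages that truly showed suicide behaviors, what fraction did the system flag? A minimal sketch with toy labels (not the study’s data):

    # Toy recall computation; the labels are invented, not the study's data.
    ground_truth = [1, 1, 0, 1, 1, 0, 1, 1]  # 1 = message shows suicide behavior
    model_flags  = [1, 0, 0, 1, 0, 0, 0, 1]  # 1 = chatbot flagged the message

    true_positives = sum(g and m for g, m in zip(ground_truth, model_flags))
    recall = true_positives / sum(ground_truth)
    print(f"recall = {recall:.2f}")  # 0.50 -> detected "about half the time"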

It seems possible that OpenAI hopes the child experts can provide feedback on how ChatGPT is impacting kids’ brains while De Choudhury helps improve efforts to notify parents of troubling chat sessions.

More recently, De Choudhury seemed optimistic about potential AI mental health benefits, telling The New York Times in April that AI therapists can still have value even if companion bots do not provide the same benefits as real relationships.

“Human connection is valuable,” De Choudhury said. “But when people don’t have that, if they’re able to form parasocial connections with a machine, it can be better than not having any connection at all.”

First council meeting focused on AI benefits

Most of the other experts on OpenAI’s council have backgrounds similar to De Choudhury’s, exploring the intersection of mental health and technology. They include Tracy Dennis-Tiwary (a psychology professor and cofounder of Arcade Therapeutics), Sara Johansen (founder of Stanford University’s Digital Mental Health Clinic), David Mohr (director of Northwestern University’s Center for Behavioral Intervention Technologies), and Andrew K. Przybylski (a professor of human behavior and technology).

There’s also Robert K. Ross, a public health expert whom OpenAI previously tapped to serve as a nonprofit commission advisor.

OpenAI confirmed that there has been one meeting so far, which served to introduce the advisors to teams working to upgrade ChatGPT and Sora. Moving forward, the council will hold recurring meetings to explore sensitive topics that may require adding guardrails. Initially, though, OpenAI appears more interested in discussing the potential benefits to mental health that could be achieved if tools were tweaked to be more helpful.

“The council will also help us think about how ChatGPT can have a positive impact on people’s lives and contribute to their well-being,” OpenAI said. “Some of our initial discussions have focused on what constitutes well-being and the ways ChatGPT might empower people as they navigate all aspects of their life.”

Notably, Przybylski co-authored a study in 2023 providing data disputing that access to the Internet has negatively affected mental health broadly. He told Mashable that his research provided the “best evidence” so far “on the question of whether Internet access itself is associated with worse emotional and psychological experiences—and may provide a reality check in the ongoing debate on the matter.” He could possibly help OpenAI explore if the data supports perceptions that AI poses mental health risks, which are currently stoking a chatbot mental health panic in Congress.

Also appearing optimistic about companion bots in particular is Johansen. In a LinkedIn post earlier this year, she recommended that companies like OpenAI apply “insights from the impact of social media on youth mental health to emerging technologies like AI companions,” concluding that “AI has great potential to enhance mental health support, and it raises new challenges around privacy, trust, and quality.”

Other experts on the council have been critical of companion bots. OpenAI noted that Mohr specifically “studies how technology can help prevent and treat depression.”

Historically, Mohr has advocated for more digital tools to support mental health, suggesting in 2017 that apps could help support people who can’t get to the therapist’s office.

More recently, though, Mohr told The Wall Street Journal in 2024 that he has concerns about AI chatbots posing as therapists.

“I don’t think we’re near the point yet where there’s just going to be an AI who acts like a therapist,” Mohr said. “There’s still too many ways it can go off the rails.”

Similarly, although Dennis-Tiwary told Wired last month that she finds the term “AI psychosis” to be “very unhelpful” in most cases that aren’t “clinical,” she has warned that “above all, AI must support the bedrock of human well-being, social connection.”

“While acknowledging that there are potentially fruitful applications of social AI for neurodivergent individuals, the use of this highly unreliable and inaccurate technology among children and other vulnerable populations is of immense ethical concern,” Dennis-Tiwary wrote last year.

For OpenAI, the wellness council could help the company turn a corner as ChatGPT and Sora continue to be heavily scrutinized. The company also confirmed that it would continue consulting “the Global Physician Network, policymakers, and more, as we build advanced AI systems in ways that support people’s well-being.”

To shield kids, California hikes fake nude fines to $250K max

California is cracking down on AI technology deemed too harmful for kids, attacking two increasingly notorious child safety fronts: companion bots and deepfake pornography.

On Monday, Governor Gavin Newsom signed the first-ever US law regulating companion bots after several teen suicides sparked lawsuits.

Moving forward, California will require any companion bot platforms—including ChatGPT, Grok, Character.AI, and the like—to create and make public “protocols to identify and address users’ suicidal ideation or expressions of self-harm.”

They must also share “statistics regarding how often they provided users with crisis center prevention notifications to the Department of Public Health,” the governor’s office said. Those stats will also be posted on the platforms’ websites, potentially helping lawmakers and parents track any disturbing trends.

Further, companion bots will be banned from claiming that they’re therapists, and platforms must take extra steps to ensure child safety, including providing kids with break reminders and preventing kids from viewing sexually explicit images.

Additionally, Newsom strengthened the state’s penalties for those who create deepfake pornography, which could help shield young people, who are increasingly targeted with fake nudes, from cyberbullying.

Now any victims, including minors, can seek up to $250,000 in damages per deepfake from any third parties who knowingly distribute nonconsensual sexually explicit material created using AI tools. Previously, the state allowed victims to recover “statutory damages of not less than $1,500 but not more than $30,000, or $150,000 for a malicious violation.”
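
As back-of-the-envelope arithmetic (treating both regimes as applying per image, an assumption made here only to show the scale of the change, not a reading of the statutes), the new cap raises a victim’s potential recovery dramatically:

    # Illustrative damages comparison; per-image aggregation is assumed
    # for arithmetic only.
    OLD_MAX = 30_000             # prior cap per violation
    OLD_MAX_MALICIOUS = 150_000  # prior cap for a malicious violation
    NEW_MAX = 250_000            # new cap per deepfake

    images = 4
    print(images * OLD_MAX)            # 120,000 under the old cap
    print(images * OLD_MAX_MALICIOUS)  # 600,000 if malicious
    print(images * NEW_MAX)            # 1,000,000 under the new law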

Both laws take effect January 1, 2026.

American families “are in a battle” with AI

The companion bot law’s sponsor, Democratic Senator Steve Padilla, said in a press release celebrating the signing that the California law demonstrates how to “put real protections into place” and said it “will become the bedrock for further regulation as this technology develops.”

4chan fined $26K for refusing to assess risks under UK Online Safety Act

The risk assessments also seem to unconstitutionally compel speech, they argued, forcing them to share information and “potentially incriminate themselves on demand.” That conflicts with 4chan and Kiwi Farms’ First Amendment rights, as well as “the right against self-incrimination and the due process clause of the Fifth Amendment of the US Constitution,” the suit says.

Additionally, “the First Amendment protects Plaintiffs’ right to permit anonymous use of their platforms,” 4chan and Kiwi Farms argued, opposing Ofcom’s requirements to verify ages of users. (This may be their weakest argument as the US increasingly moves to embrace age gates.)

4chan is hoping a US district court will intervene and block enforcement of the OSA, arguing that the US must act now to protect all US companies. Failing to do so could be a slippery slope, as the UK is supposedly targeting “the most well-known, but small and, financially speaking, defenseless platforms” in the US before mounting attacks to censor “larger American companies,” 4chan and Kiwi Farms argued.

Ofcom has until November 25 to respond to the lawsuit and has maintained that the OSA is not a censorship law.

On Monday, Britain’s technology secretary, Liz Kendall, called the OSA a “lifeline” meant to protect people across the UK “from the darkest corners of the Internet,” the Record reported.

“Services can no longer ignore illegal content, like encouraging self-harm or suicide, circulating online which can devastate young lives and leaves families shattered,” Kendall said. “This fine is a clear warning to those who fail to remove illegal content or protect children from harmful material.”

Whether 4chan and Kiwi Farms can win their fight to create a carveout in the OSA for American companies remains unclear, but the Federal Trade Commission agrees that the UK law is an overreach. In August, FTC Chair Andrew Ferguson warned US tech companies against complying with the OSA, claiming that censoring Americans to comply with UK law is a violation of the FTC Act, the Record reported.

“American consumers do not reasonably expect to be censored to appease a foreign power and may be deceived by such actions,” Ferguson told tech executives in a letter.

Another lawyer backing 4chan, Preston Byrne, seemed to echo Ferguson, telling the BBC, “American citizens do not surrender our constitutional rights just because Ofcom sends us an e-mail.”
