Policy


Trump admin pressured Facebook into removing ICE-tracking group

Attorney General Pam Bondi today said that Facebook removed an ICE-tracking group after “outreach” from the Department of Justice. “Today following outreach from @thejusticedept, Facebook removed a large group page that was being used to dox and target @ICEgov agents in Chicago,” Bondi wrote in an X post.

Bondi alleged that a “wave of violence against ICE has been driven by online apps and social media campaigns designed to put ICE officers at risk just for doing their jobs.” She added that the DOJ “will continue engaging tech companies to eliminate platforms where radicals can incite imminent violence against federal law enforcement.”

When contacted by Ars, Facebook owner Meta said the group “was removed for violating our policies against coordinated harm.” Meta didn’t describe any specific violation but directed us to a policy against “coordinating harm and promoting crime,” which includes a prohibition against “outing the undercover status of law enforcement, military, or security personnel.”

The statement was sent by Francis Brennan, a former Trump campaign advisor who was hired by Meta in January.

The White House recently claimed there has been “a more than 1,000 percent increase in attacks on U.S. Immigration and Customs Enforcement (ICE) officers since January 21, 2025, compared to the same period last year.” Government officials haven’t offered proof of this claim, according to an NPR report that said “there is no public evidence that [attacks] have spiked as dramatically as the federal government has claimed.”

The Justice Department contacted Meta after right-wing activist Laura Loomer sought action against the “ICE Sighting-Chicagoland” group, which had over 84,000 members on Facebook. “Fantastic news. DOJ source tells me they have seen my report and they have contacted Facebook and their executives at META to tell them they need to remove these ICE tracking pages from the platform,” Loomer wrote yesterday.

The ICE Sighting-Chicagoland group “has been increasingly used over the last five weeks of ‘Operation Midway Blitz,’ President Donald Trump’s intense deportation campaign, to warn neighbors that federal agents are near schools, grocery stores and other community staples so they can take steps to protect themselves,” the Chicago Sun-Times wrote today.

Trump slammed Biden for social media “censorship”

Trump and Republicans repeatedly criticized the Biden administration for pressuring social media companies into removing content. In a day-one executive order declaring an end to “federal censorship,” Trump said “the previous administration trampled free speech rights by censoring Americans’ speech on online platforms, often by exerting substantial coercive pressure on third parties, such as social media companies, to moderate, deplatform, or otherwise suppress speech that the Federal Government did not approve.”



OpenAI unveils “wellness” council; suicide prevention expert not included



OpenAI reveals which experts are steering ChatGPT mental health upgrades.

Ever since a lawsuit accused ChatGPT of becoming a teen’s “suicide coach,” OpenAI has been scrambling to make its chatbot safer. Today, the AI firm unveiled the experts it hired to help make ChatGPT a healthier option for all users.

In a press release, OpenAI explained that its Expert Council on Wellness and AI started taking shape after the company began informally consulting with experts on parental controls earlier this year. Now the council has been formalized, bringing together eight “leading researchers and experts with decades of experience studying how technology affects our emotions, motivation, and mental health” to help steer ChatGPT updates.

One priority was finding “several council members with backgrounds in understanding how to build technology that supports healthy youth development,” OpenAI said, “because teens use ChatGPT differently than adults.”

That effort includes David Bickham, a research director at Boston Children’s Hospital, who has closely monitored how social media impacts kids’ mental health, and Mathilde Cerioli, the chief science officer at a nonprofit called Everyone.AI. Cerioli studies the opportunities and risks of children using AI, particularly focused on “how AI intersects with child cognitive and emotional development.”

These experts can seemingly help OpenAI better understand how safeguards can fail kids during extended conversations and ensure that kids aren’t left particularly vulnerable to so-called “AI psychosis,” a phenomenon in which longer chats appear to trigger mental health issues.

In January, Bickham noted in an American Psychological Association article on AI in education that “little kids learn from characters” already—as they do things like watch Sesame Street—and form “parasocial relationships” with those characters. AI chatbots could be the next frontier, possibly filling in teaching roles if we know more about the way kids bond with chatbots, Bickham suggested.

“How are kids forming a relationship with these AIs, what does that look like, and how might that impact the ability of AIs to teach?” Bickham posited.

Cerioli closely monitors AI’s influence in kids’ worlds. She suggested last month that kids who grow up using AI may risk having their brains rewired to “become unable to handle contradiction,” Le Monde reported, especially “if their earliest social interactions, at an age when their neural circuits are highly malleable, are conducted with endlessly accommodating entities.”

“Children are not mini-adults,” Cerioli said. “Their brains are very different, and the impact of AI is very different.”

Neither expert is focused on suicide prevention in kids. That may disappoint dozens of suicide prevention experts who last month pushed OpenAI to consult with experts deeply familiar with what “decades of research and lived experience” show about “what works in suicide prevention.”

OpenAI experts on suicide risks of chatbots

On a podcast last year, when asked about the earliest reported chatbot-linked teen suicide, Cerioli said that child brain development is the area she’s most “passionate” about. She said the news didn’t surprise her and noted that her research is focused less on figuring out “why that happened” and more on why it can happen, because kids are “primed” to seek out “human connection.”

She noted that a troubled teen confessing suicidal ideation to a friend in the real world would more likely lead to an adult getting involved, whereas a chatbot would need specific safeguards built in to ensure parents are notified.

This seems in line with the steps OpenAI took to add parental controls, consulting with experts to design “the notification language for parents when a teen may be in distress,” the company’s press release said. However, on a resources page for parents, OpenAI has confirmed that parents won’t always be notified if a teen is linked to real-world resources after expressing “intent to self-harm,” which may alarm some critics who think the parental controls don’t go far enough.

Although OpenAI does not specify this in the press release, it appears that Munmun De Choudhury, a professor of interactive computing at Georgia Tech, could help evolve ChatGPT to recognize when kids are in danger and notify parents.

De Choudhury studies computational approaches to improve “the role of online technologies in shaping and improving mental health,” OpenAI noted.

In 2023, she conducted a study on the benefits and harms of large language models in digital mental health. The study was funded in part through a grant from the American Foundation for Suicide Prevention and noted that chatbots providing therapy services at that point could only detect “suicide behaviors” about half the time. The task appeared “unpredictable” and “random” to scholars, she reported.

It seems possible that OpenAI hopes the child experts can provide feedback on how ChatGPT is impacting kids’ brains while De Choudhury helps improve efforts to notify parents of troubling chat sessions.

More recently, De Choudhury seemed optimistic about potential AI mental health benefits, telling The New York Times in April that AI therapists can still have value even if companion bots do not provide the same benefits as real relationships.

“Human connection is valuable,” De Choudhury said. “But when people don’t have that, if they’re able to form parasocial connections with a machine, it can be better than not having any connection at all.”

First council meeting focused on AI benefits

Most of the other experts on OpenAI’s council have backgrounds similar to De Choudhury’s, exploring the intersection of mental health and technology. They include Tracy Dennis-Tiwary (a psychology professor and cofounder of Arcade Therapeutics), Sara Johansen (founder of Stanford University’s Digital Mental Health Clinic), David Mohr (director of Northwestern University’s Center for Behavioral Intervention Technologies), and Andrew K. Przybylski (a professor of human behavior and technology).

There’s also Robert K. Ross, a public health expert whom OpenAI previously tapped to serve as a nonprofit commission advisor.

OpenAI confirmed that there has been one meeting so far, which served to introduce the advisors to teams working to upgrade ChatGPT and Sora. Moving forward, the council will hold recurring meetings to explore sensitive topics that may require adding guardrails. Initially, though, OpenAI appears more interested in discussing the potential benefits to mental health that could be achieved if tools were tweaked to be more helpful.

“The council will also help us think about how ChatGPT can have a positive impact on people’s lives and contribute to their well-being,” OpenAI said. “Some of our initial discussions have focused on what constitutes well-being and the ways ChatGPT might empower people as they navigate all aspects of their life.”

Notably, Przybylski co-authored a study in 2023 providing data disputing that access to the Internet has negatively affected mental health broadly. He told Mashable that his research provided the “best evidence” so far “on the question of whether Internet access itself is associated with worse emotional and psychological experiences—and may provide a reality check in the ongoing debate on the matter.” He could possibly help OpenAI explore if the data supports perceptions that AI poses mental health risks, which are currently stoking a chatbot mental health panic in Congress.

Also appearing optimistic about companion bots in particular is Johansen. In a LinkedIn post earlier this year, she recommended that companies like OpenAI apply “insights from the impact of social media on youth mental health to emerging technologies like AI companions,” concluding that “AI has great potential to enhance mental health support, and it raises new challenges around privacy, trust, and quality.”

Other experts on the council have been critical of companion bots. OpenAI noted that Mohr specifically “studies how technology can help prevent and treat depression.”

Historically, Mohr has advocated for more digital tools to support mental health, suggesting in 2017 that apps could help support people who can’t get to the therapist’s office.

More recently, though, Mohr told The Wall Street Journal in 2024 that he had concerns about AI chatbots posing as therapists.

“I don’t think we’re near the point yet where there’s just going to be an AI who acts like a therapist,” Mohr said. “There’s still too many ways it can go off the rails.”

Similarly, although Dennis-Tiwary told Wired last month that she finds the term “AI psychosis” to be “very unhelpful” in most cases that aren’t “clinical,” she has warned that “above all, AI must support the bedrock of human well-being, social connection.”

“While acknowledging that there are potentially fruitful applications of social AI for neurodivergent individuals, the use of this highly unreliable and inaccurate technology among children and other vulnerable populations is of immense ethical concern,” Dennis-Tiwary wrote last year.

For OpenAI, the wellness council could help the company turn a corner as ChatGPT and Sora continue to be heavily scrutinized. The company also confirmed that it would continue consulting “the Global Physician Network, policymakers, and more, as we build advanced AI systems in ways that support people’s well-being.”




To shield kids, California hikes fake nude fines to $250K max

California is cracking down on AI technology deemed too harmful for kids, attacking two increasingly notorious child safety fronts: companion bots and deepfake pornography.

On Monday, Governor Gavin Newsom signed the first-ever US law regulating companion bots after several teen suicides sparked lawsuits.

Moving forward, California will require any companion bot platforms—including ChatGPT, Grok, Character.AI, and the like—to create and make public “protocols to identify and address users’ suicidal ideation or expressions of self-harm.”

They must also share “statistics regarding how often they provided users with crisis center prevention notifications to the Department of Public Health,” the governor’s office said. Those stats will also be posted on the platforms’ websites, potentially helping lawmakers and parents track any disturbing trends.

Further, companion bots will be banned from claiming that they’re therapists, and platforms must take extra steps to ensure child safety, including providing kids with break reminders and preventing kids from viewing sexually explicit images.

Additionally, Newsom strengthened the state’s penalties for those who create deepfake pornography, which could help shield young people, who are increasingly targeted with fake nudes, from cyberbullying.

Now any victims, including minors, can seek up to $250,000 in damages per deepfake from any third parties who knowingly distribute nonconsensual sexually explicit material created using AI tools. Previously, the state allowed victims to recover “statutory damages of not less than $1,500 but not more than $30,000, or $150,000 for a malicious violation.”

Both laws take effect January 1, 2026.

American families “are in a battle” with AI

The companion bot law’s sponsor, Democratic Senator Steve Padilla, said in a press release celebrating the signing that the California law demonstrates how to “put real protections into place” and said it “will become the bedrock for further regulation as this technology develops.”



4chan fined $26K for refusing to assess risks under UK Online Safety Act

The risk assessments also seem to unconstitutionally compel speech, they argued, forcing them to share information and “potentially incriminate themselves on demand.” That conflicts with 4chan and Kiwi Farms’ Fourth Amendment rights, as well as “the right against self-incrimination and the due process clause of the Fifth Amendment of the US Constitution,” the suit says.

Additionally, “the First Amendment protects Plaintiffs’ right to permit anonymous use of their platforms,” 4chan and Kiwi Farms argued, opposing Ofcom’s requirements to verify ages of users. (This may be their weakest argument as the US increasingly moves to embrace age gates.)

4chan is hoping a US district court will intervene and ban enforcement of the OSA, arguing that the US must act now to protect all US companies. Failing to act now could be a slippery slope, as the UK is supposedly targeting “the most well-known, but small and, financially speaking, defenseless platforms” in the US before mounting attacks to censor “larger American companies,” 4chan and Kiwi Farms argued.

Ofcom has until November 25 to respond to the lawsuit and has maintained that the OSA is not a censorship law.

On Monday, Britain’s technology secretary, Liz Kendall, called the OSA a “lifeline” meant to protect people across the UK “from the darkest corners of the Internet,” the Record reported.

“Services can no longer ignore illegal content, like encouraging self-harm or suicide, circulating online which can devastate young lives and leaves families shattered,” Kendall said. “This fine is a clear warning to those who fail to remove illegal content or protect children from harmful material.”

Whether 4chan and Kiwi Farms can win their fight to create a carveout in the OSA for American companies remains unclear, but the Federal Trade Commission agrees that the UK law is an overreach. In August, FTC Chair Andrew Ferguson warned US tech companies against complying with the OSA, claiming that censoring Americans to comply with UK law is a violation of the FTC Act, the Record reported.

“American consumers do not reasonably expect to be censored to appease a foreign power and may be deceived by such actions,” Ferguson told tech executives in a letter.

Another lawyer backing 4chan, Preston Byrne, seemed to echo Ferguson, telling the BBC, “American citizens do not surrender our constitutional rights just because Ofcom sends us an e-mail.”



OpenAI will stop saving most ChatGPT users’ deleted chats

Moving forward, all of the deleted and temporary chats that were previously saved under the preservation order will continue to be accessible to news plaintiffs, who are looking for examples of outputs infringing their articles or attributing misinformation to their publications.

Additionally, OpenAI will continue monitoring certain ChatGPT accounts, saving deleted and temporary chats of any users whose domains have been flagged by news organizations since they began searching through the data. If news plaintiffs flag additional domains during future meetings with OpenAI, more accounts could be roped in.

Ars could not immediately reach OpenAI or the Times’ legal team for comment.

The dispute with news plaintiffs continues to heat up beyond the battle over user logs, most recently with co-defendant Microsoft pushing to keep its AI companion Copilot out of the litigation.

The stakes remain high for both sides. News organizations have alleged that ChatGPT and other allegedly copyright-infringing tools threaten to replace them in their market while potentially damaging their reputations by attributing false information to them.

OpenAI may be increasingly pressured to settle the lawsuit, not by news organizations but by insurance companies that won’t provide comprehensive coverage for its AI products while multiple potentially multibillion-dollar lawsuits are pending.



Boring Company cited for almost 800 environmental violations in Las Vegas

Workers have complained of chemical burns from the waste material generated by the tunneling process, and firefighters must decontaminate their equipment after conducting rescues from the project sites. The company was fined more than $112,000 by Nevada’s Occupational Safety and Health Administration in late 2023 after workers complained of “ankle-deep” water in the tunnels, muck spills, and burns. The Boring Co. has contested the violations. Just last month, a construction worker suffered a “crush injury” after being pinned between two 4,000-pound pipes, according to police records. Firefighters used a crane to extract him from the tunnel opening.

After ProPublica and City Cast Las Vegas published their January story, both the CEO and the chairman of the LVCVA board criticized the reporting, arguing the project is well-regulated. As an example, LVCVA CEO Steve Hill cited the delayed opening of a Loop station by local officials who were concerned that fire safety requirements weren’t adequate. Board chair Jim Gibson, who is also a Clark County commissioner, agreed the project is appropriately regulated.

“We wouldn’t have given approvals if we determined things weren’t the way they ought to be and what it needs to be for public safety reasons,” Gibson said, according to the Las Vegas Review Journal. “Our sense is we’ve done what we need to do to protect the public.”

Asked for a response to the new proposed fines, an LVCVA spokesperson said, “We won’t be participating in this story.”

The repeated allegations that the company is violating regulations—including the bespoke regulatory arrangement agreed to by the company—indicate that officials aren’t keeping the public safe, said Ben Leffel, an assistant public policy professor at the University of Nevada, Las Vegas.

“Not if they’re recommitting almost the exact violation,” Leffel said.

Leffel questioned whether a $250,000 penalty would be significant enough to change operations at The Boring Co., which was valued at $7 billion in 2023. Studies show that fines that don’t put a significant dent in a company’s profit don’t deter companies from future violations, Leffel said.

A state spokesperson disagreed that regulators aren’t keeping the public safe and said the agency believes its penalties will deter “future non-compliance.”

“NDEP is actively monitoring and inspecting the projects,” the spokesperson said.

This story originally appeared on ProPublica.



“Extremely angry” Trump threatens “massive” tariff on all Chinese exports

The chairman of the House of Representatives’ Select Committee on the Chinese Communist Party (CCP), John Moolenaar (R-Mich.), issued a statement, suggesting that, unlike Trump, he’d seen China’s rare earths move coming. He pushed Trump to interpret China’s export controls as “an economic declaration of war against the United States and a slap in the face to President Trump.”

“China has fired a loaded gun at the American economy, seeking to cut off critical minerals used to make the semiconductors that power the American military, economy, and devices we use every day including cars, phones, computers, and TVs,” Moolenaar said. “Every American will be negatively affected by China’s action, and that’s why we must address America’s vulnerabilities and build our own leverage against China.”

To strike back forcefully, Moolenaar suggested passing a law he sponsored that he said would “end preferential trade treatment for China, build a resilient resource reserve of critical minerals, secure American research and campuses from Chinese influence, and strangle China’s technology sector with export controls instead of selling it advanced chips.”

Moolenaar also emphasized steps he recommended back in September that he claimed Trump could take to “create real leverage with China” in the face of its stranglehold on rare earths.

Those included “restricting or suspending Chinese airline landing rights in the US,” “reviewing export control policies governing the sale of commercial aircraft, parts, and maintenance services to China,” and “restricting outbound investment in China’s aviation sector in coordination with key allies.”

“These steps would send a clear message to Beijing that it cannot choke off critical supplies to our defense industries without consequences to its own strategic sectors,” Moolenaar wrote in his September letter to Trump. “By acting together, the US and its allies can strengthen our resilience, reinforce solidarity, and create real leverage with China.”



Apple and Google reluctantly comply with Texas age verification law

Apple yesterday announced a plan to comply with a Texas age verification law and warned that changes required by the law will reduce privacy for app users.

“Beginning January 1, 2026, a new state law in Texas—SB2420—introduces age assurance requirements for app marketplaces and developers,” Apple said yesterday in a post for developers. “While we share the goal of strengthening kids’ online safety, we are concerned that SB2420 impacts the privacy of users by requiring the collection of sensitive, personally identifiable information to download any app, even if a user simply wants to check the weather or sports scores.”

The Texas App Store Accountability Act requires app stores to verify users’ ages and imposes restrictions on those under 18. Apple said that developers will have “to adopt new capabilities and modify behavior within their apps to meet their obligations under the law.”

Apple’s post noted that similar laws will take effect later in 2026 in Utah and Louisiana. Google also recently announced plans for complying with the three state laws and said the new requirements reduce user privacy.

“While we have user privacy and trust concerns with these new verification laws, Google Play is designing APIs, systems, and tools to help you meet your obligations,” Google told developers in an undated post.

The Utah law is scheduled to take effect May 7, 2026, while the Louisiana law will take effect July 1, 2026. The Texas, Utah, and Louisiana “laws impose significant new requirements on many apps that may need to provide age appropriate experiences to users in these states,” Google said. “These requirements include ingesting users’ age ranges and parental approval status for significant changes from app stores and notifying app stores of significant changes.”

New features for Texas

Apple and Google both announced new features to help developers comply.

“Once this law goes into effect, users located in Texas who create a new Apple Account will be required to confirm whether they are 18 years or older,” Apple said. “All new Apple Accounts for users under the age of 18 will be required to join a Family Sharing group, and parents or guardians will need to provide consent for all App Store downloads, app purchases, and transactions using Apple’s In-App Purchase system by the minor.”



Musk’s X posts on ketamine, Putin spur release of his security clearances

“A disclosure, even with redactions, will reveal whether a security clearance was granted with or without conditions or a waiver,” DCSA argued.

Ultimately, DCSA failed to prove that Musk risked “embarrassment or humiliation” not only if the public learned what specific conditions or waivers applied to Musk’s clearances but also if there were any conditions or waivers at all, Cote wrote.

Three cases that DCSA cited to support this position—including a case where victims of Jeffrey Epstein’s trafficking scheme had a substantial privacy interest in non-disclosure of detailed records—do not support the government’s logic, Cote said. The judge explained that the disclosures would not have affected the privacy rights of any third parties, emphasizing that “Musk’s diminished privacy interest is underscored by the limited information plaintiffs sought in their FOIA request.”

Musk’s X posts discussing his occasional use of prescription ketamine, and his disclosure on a podcast that his marijuana smoking had prompted NASA to require random drug testing, “only enhance” the public’s interest in how Musk’s security clearances were vetted, Cote wrote. Additionally, Musk has posted about speaking with Vladimir Putin, prompting substantial public interest in how his foreign contacts may or may not restrict his security clearances. More than 2 million people viewed Musk’s X posts on these subjects, the judge wrote, noting that:

It is undisputed that drug use and foreign contacts are two factors DCSA considers when determining whether to impose conditions or waivers on a security clearance grant. DCSA fails to explain why, given Musk’s own, extensive disclosures, the mere disclosure that a condition or waiver exists (or that no condition or waiver exists) would subject him to “embarrassment or humiliation.”

Rather, for the public, “the list of Musk’s security clearances, including any conditions or waivers, could provide meaningful insight into DCSA’s performance of that duty and responses to Musk’s admissions, if any,” Cote wrote.

In a footnote, Cote said that this substantial public interest existed before Musk became a special government employee, ruling that DCSA was wrong to block the disclosures seeking information on Musk as a major government contractor. Her ruling likely paves the way for the NYT or other news organizations to submit FOIA requests for a list of Musk’s clearances while he helmed DOGE.

It’s not immediately clear when the NYT will receive the list they requested in 2024, but the government has until October 17 to request redactions before it’s publicized.

“The Times brought this case because the public has a right to know about how the government conducts itself,” Charlie Stadtlander, an NYT spokesperson, said. “The decision reaffirms that fundamental principle and we look forward to receiving the document at issue.”



ISPs created so many fees that FCC will kill requirement to list them all

The FCC was required by Congress to implement broadband-label rules, but the Carr FCC says the law doesn’t “require itemizing pass-through fees that vary by location.”

“Commenters state that itemizing such fees requires providers to produce multiple labels for identical services,” the FCC plan says, with a footnote to comments from industry groups such as USTelecom and NCTA. “We believe, consistent with commenters in the Delete, Delete, Delete proceeding, that itemizing can lead to a proliferation of labels and of labels so lengthy that the fees overwhelm other important elements of the label.”

In a blog post Monday, Carr said his plan is part of a “focus on consumer protection.” He said the FCC “will vote on a notice that would reexamine broadband nutrition labels so that we can separate the wheat from the chaff. We want consumers to get quick and easy access to the information they want and need to compare broadband plans (as Congress has provided) without imposing unnecessary burdens.”

ISPs would still be required to provide the labels, but with less information. The NPRM said that eliminating the rules targeted for deletion will not “change the core label requirements to display a broadband consumer label containing critical information about the provider’s service offerings, including information about pricing, introductory rates, data allowances, and performance metrics.”

ISPs said listing fees was too hard

In 2023, five major trade groups representing US broadband providers petitioned the FCC to scrap the list-every-fee requirement before it took effect. Comcast told the commission that the rule “impose[s] significant administrative burdens and unnecessary complexity in complying with the broadband label requirements.”

Rejecting the industry complaints, then-Chairwoman Jessica Rosenworcel said that “every consumer needs transparent information when making decisions about what Internet service offering makes the most sense for their family or household. No one wants to be hit with charges they didn’t ask for or they did not expect.”

The Rosenworcel FCC’s order denying the industry petition pointedly said that ISPs could simplify pricing instead of charging loads of fees. “ISPs could alternatively roll such discretionary fees into the base monthly price, thereby eliminating the need to itemize them on the label,” the order said.



Vandals deface ads for AI necklaces that listen to all your conversations

In addition to backlash over feared surveillance capitalism, critics have accused Schiffman of taking advantage of the loneliness epidemic. Conducting a survey last year, researchers with Harvard Graduate School of Education’s Making Caring Common found that people between “30-44 years of age were the loneliest group.” Overall, 73 percent of those surveyed “selected technology as contributing to loneliness in the country.”

But Schiffman rejects these criticisms, telling the NYT that his AI Friend pendant is intended to supplement human friends, not replace them, supposedly helping to raise the “average emotional intelligence” of users “significantly.”

“I don’t view this as dystopian,” Schiffman said, suggesting that “the AI friend is a new category of companionship, one that will coexist alongside traditional friends rather than replace them,” the NYT reported. “We have a cat and a dog and a child and an adult in the same room,” the Friend founder said. “Why not an AI?”

The MTA has not commented on the controversy, but Victoria Mottesheard—a vice president at Outfront Media, which manages MTA advertising—told the NYT that the Friend campaign blew up because AI “is the conversation of 2025.”

Website lets anyone deface Friend ads

So far, the Friend ads have not yielded significant sales, Schiffman confirmed, telling the NYT that only 3,100 pendants have sold. He suggested that society isn’t ready for AI companions to be promoted at such a large scale and that his ad campaign will help normalize AI friends.

In the meantime, critics have rushed to attack Friend on social media, inspiring a website where anyone can vandalize a Friend ad and share it online. That website has received close to 6,000 submissions so far, its creator, Marc Mueller, told the NYT, and visitors can take a tour of these submissions by choosing “ride train to see more” after creating their own vandalized version.

For visitors to Mueller’s site, riding the train displays a carousel documenting backlash to Friend, as well as “performance art” by visitors poking fun at the ads in less serious ways. One example showed a vandalized ad changing “Friend” to “Fries,” with a crude illustration of McDonald’s French fries, while another transformed the ad into a campaign for “fried chicken.”

Others were seemingly more serious about turning the ad into a warning. One vandal drew a bunch of arrows pointing to the “end” in Friend while turning the pendant into a cry-face emoji, seemingly drawing attention to research on the mental health risks of relying on AI companions—including the alleged suicide risks of products like Character.AI and ChatGPT, which have spawned lawsuits and prompted a Senate hearing.



Ted Cruz doesn’t seem to understand Wikipedia, lawyer for Wikimedia says



Wikipedia host’s lawyer wants to help Ted Cruz understand how the platform works.


The letter from Sen. Ted Cruz (R-Texas) accusing Wikipedia of left-wing bias seems to be based on fundamental misunderstandings of how the platform works, according to a lawyer for the nonprofit foundation that operates the online encyclopedia.

“The foundation is very much taking the approach that Wikipedia is actually pretty great and a lot of what’s in this letter is actually misunderstandings,” Jacob Rogers, associate general counsel at the Wikimedia Foundation, told Ars in an interview. “And so we are more than happy, despite the pressure that comes from these things, to help people better understand how Wikipedia works.”

Cruz’s letter to Wikimedia Foundation CEO Maryana Iskander expressed concern “about ideological bias on the Wikipedia platform and at the Wikimedia Foundation.” Cruz alleged that Wikipedia articles “often reflect a left-wing bias.” He asked the foundation for “documents sufficient to show what supervision, oversight, or influence, if any, the Wikimedia Foundation has over the editing community,” and “documents sufficient to show how the Wikimedia Foundation addresses political or ideological bias.”

As many people know, Wikipedia is edited by volunteers through a collaborative process.

“We’re not deciding what the editorial policies are for what is on Wikipedia,” Rogers said, describing the Wikimedia Foundation’s hands-off approach. “All of that, both the writing of the content and the determining of the editorial policies, is done through the volunteer editors” through “public conversation and discussion and trying to come to a consensus. They make all of that visible in various ways to the reader. So you go and you read a Wikipedia article, you can see what the sources are, what someone has written, you can follow the links yourselves.”

“They’re worried about something that is just not present at all”

Cruz’s letter raised concerns about “the influence of large donors on Wikipedia’s content creation or editing practices.” But Rogers said that “people who donate to Wikipedia don’t have any influence over content and we don’t even have that many large donors to begin with. It is primarily funded by people donating through the website fundraisers, so I think they’re worried about something that is just not present at all.”

Anyone unhappy with Wikipedia content can participate in the writing and editing, he said. “It’s still open for everybody to participate. If someone doesn’t like what it says, they can go on and say, ‘Hey, I don’t like the sources that are being used, or I think a different source should be used that isn’t there,'” Rogers said. “Other people might disagree with them, but they can have that conversation and try to figure it out and make it better.”

Rogers said that some people wrongly assume there is central control over Wikipedia editing. “I feel like people are asking questions assuming that there is something more central that is controlling all of this that doesn’t actually exist,” he said. “I would love to see it a little better understood about how this sort of public model works and the fact that people can come judge it for themselves and participate for themselves. And maybe that will have it sort of die down as a source of government pressure, government questioning, and go onto something else.”

Cruz’s letter accused Wikipedia of pushing antisemitic narratives. He described the Wikimedia Foundation as “intervening in editorial decisions” in an apparent reference to an incident in which the platform’s Arbitration Committee responded to editing conflicts on the Israeli–Palestinian conflict by banning eight editors.

“The Wikimedia Foundation has said it is taking steps to combat this editing campaign, raising further questions about the extent to which it is intervening in editorial decisions and to what end,” Cruz wrote.

Explaining the Arbitration Committee

The Arbitration Committee for the English-language edition of Wikipedia consists of volunteers who “are elected by the rest of the English Wikipedia editors,” Rogers said. The group is a “dispute resolution body when people can’t otherwise resolve their disputes.” The committee made “a ruling on Israel/Palestine because it is such a controversial subject and it’s not just banning eight editors, it’s also how contributions are made in that topic area and sort of limiting it to more experienced editors,” he said.

The members of the committee “do not control content,” Rogers said. “The arbitration committee is not a content dispute body. They’re like a behavior conduct dispute body, but they try to set things up so that fights will not break out subsequently.”

As with other topics, people can participate if they believe articles are antisemitic. “That is sort of squarely in the user editorial processes,” Rogers said. “If someone thinks that something on Wikipedia is antisemitic, they should change it or propose to people working on it that they change it or change sources. I do think the editorial community, especially on topics related to antisemitism and related to Israel/Palestine, has a lot of various safeguards in place. That particular topic is probably the most controversial topic in the world, but there’s still a lot of editorial safeguards in place where people can discuss things. They can get help with dispute resolution from bringing in other editors if there’s a behavioral problem, they can ask for help from Wikipedia administrators, and all the way up to the English Wikipedia arbitration committee.”

Cruz’s letter called out Wikipedia’s goal of “knowledge equity,” and accused the foundation of favoring “ideology over neutrality.” Cruz also pointed to a Daily Caller report that the foundation donated “to activist groups seeking to bring the online encyclopedia more in line with traditionally left-of-center points of view.”

Rogers countered that “the theory behind that is sort of misunderstood by the letter where it’s not about equity like the DEI equity, it is about the mission of the Wikimedia Foundation to have the world’s knowledge, to prepare educational content and to have all the different knowledge in the world to the extent possible.” In topic areas where people with expertise haven’t contributed much to Wikipedia, “we are looking to write grants to help fill in those gaps in knowledge and have a more broad range of information and sources,” he said.

What happens next

Rogers is familiar with the workings of Senate investigations from personal experience. He joined the Wikimedia Foundation in 2014 after working for the Senate’s Permanent Subcommittee on Investigations under the late Sen. Carl Levin (D-Mich.).

While Cruz demanded a trove of documents, Rogers said the foundation doesn’t necessarily have to provide them. A subpoena could be issued to Wikimedia, but that hasn’t happened.

“What Cruz has sent us is just a letter,” Rogers said. “There is no legal proceeding whatsoever. There’s no formal authority behind this letter. It’s just a letter from a person in the legislative branch who cares about the topic, so there is nothing compelling us to give him anything. I think we are probably going to answer the letter, but there’s no sort of legal requirement to actually fully provide everything that answers every question.” Assuming it responds, the foundation would try to answer Cruz’s questions “to the extent that we can, and without violating any of our company policies,” and without giving out nonpublic information, he said.

A letter responding to Cruz wouldn’t necessarily be made public. In April, the foundation received a letter from 23 lawmakers about alleged antisemitism and anti-Israel bias. The foundation’s response to that letter is not public.

Cruz is seeking changes at Wikipedia just a couple weeks after criticizing Federal Communications Commission Chairman Brendan Carr for threatening ABC with station license revocations over political content on Jimmy Kimmel’s show. While the pressure tactics used by Cruz and Carr have similarities, Rogers said there are also key differences between the legislative and executive branches.

“Congressional committees, they are investigating something to determine what laws to make, and so they have a little bit more freedom to just look into the state of the world to try to decide what laws they want to write or what laws they want to change,” he said. “That doesn’t mean that they can’t use their authority in a way that might ultimately go down a path of violating the First Amendment or something like that. They have a little bit more runway to get there versus an executive branch agency which, if it is pressuring someone, it is doing so for a very immediate decision usually.”

What does Cruz want? It’s unclear

Rogers said it’s not clear whether Cruz’s inquiry is the first step toward changing the law. “The questions in the letter don’t really say why they want the information they want other than the sort of immediacy of their concerns,” he said.

Cruz chairs the Senate Commerce Committee, which “does have lawmaking authority over the Internet writ large,” Rogers said. “So they may be thinking about changes to the law.”

One potential target is Section 230 of the Communications Decency Act, which gives online platforms immunity from lawsuits over how they moderate user-submitted content.

“From the perspective of the foundation, we’re staunch defenders of Section 230,” Rogers said, adding that Wikimedia supports “broad laws around intellectual property and privacy and other things that allow a large amount of material to be appropriately in the public domain, to be written about on a free encyclopedia like Wikipedia, but that also protect the privacy of editors who are contributing to Wikipedia.”

