Policy


TikTok users “absolutely justified” for fearing MAGA makeover, experts say


Spectacular coincidence or obvious censorship?

TikTok’s tech issues abound as censorship fears drive users to delete app.


TikTok wants users to believe that the errors blocking anti-ICE video uploads and direct messages mentioning Jeffrey Epstein are technical glitches—not signs that the platform is shifting to censor content critical of Donald Trump after he hand-picked the US owners who took over the app last week.

However, experts say that TikTok users’ censorship fears are justified, whether the bugs are to blame or not.

Ioana Literat, an associate professor of technology, media, and learning at Teachers College, Columbia University, has studied TikTok’s politics since the app first shot to popularity in the US in 2018. She told Ars that “users’ fears are absolutely justified” and explained why the “bugs” explanation is “insufficient.”

“Even if these are technical glitches, the pattern of what’s being suppressed reveals something significant,” Literat told Ars. “When your ‘bug’ consistently affects anti-Trump content, Epstein references, and anti-ICE videos, you’re looking at either spectacular coincidence or systems that have been designed—whether intentionally or through embedded biases—to flag and suppress specific political content.”

TikTok users are savvy, Literat noted, and what’s being cast as “paranoia” about the app’s bugs actually stems from their “digital literacy.”

“They’ve watched Instagram suppress Palestine content, they’ve seen Twitter’s transformation under Musk, they’ve experienced shadow-banning and algorithmic suppression, including on TikTok prior to this,” Literat said. “So, their pattern recognition isn’t paranoia, but rather digital literacy.”

Casey Fiesler, an associate professor of technology ethics and internet law at the University of Colorado, Boulder, agreed that TikTok’s “bugs” explanation wasn’t enough to address users’ fears. She told CNN that TikTok risks losing users’ trust the longer that errors damage the perception of the app.

“Even if this isn’t purposeful censorship, does it matter? In terms of perception and trust, maybe,” Fiesler told CNN.

Some users are already choosing to leave TikTok. A quick glance at the TikTok subreddit shows many users grieving while vowing to delete the app, Literat pointed out, though some are reportedly struggling to delete accounts due to technical issues. Even with some users blocked from abandoning their accounts, however, “the daily average of TikTok uninstalls are up nearly 150 percent in the last five days compared to the last three months,” data analysis firm Sensor Tower told CNN.

A TikTok USDS spokesperson told Ars that US owners have not yet made any changes to the algorithm or content moderation policies. So far, the only changes have been to the US app’s terms of use and privacy policy, which impacted what location data is collected, how ads are targeted, and how AI interactions are monitored.

For TikTok, the top priority appears to be fixing the bugs, which were attributed to a power outage at a US data center. A TikTok USDS spokesperson told NPR that TikTok is also investigating the issue where some users can’t talk about Epstein in DMs.

“We don’t have rules against sharing the name ‘Epstein’ in direct messages and are investigating why some users are experiencing issues,” TikTok’s spokesperson said.

TikTok’s response came after California governor Gavin Newsom declared on X that “it’s time to investigate” TikTok.

“I am launching a review into whether TikTok is violating state law by censoring Trump-critical content,” Newsom said. His post quote-tweeted an X user who shared a screenshot of the error message TikTok displayed when some users referenced Epstein and joked, “so the agreement for TikTok to sell its US business to GOP-backed investors was finalized a few days ago,” and “now you can’t mention Epstein lmao.”

As of Tuesday afternoon, the results of TikTok’s investigation into the “Epstein” issue were not publicly available, but TikTok may post updates as technical issues are resolved.

“We’ve made significant progress in recovering our US infrastructure with our US data center partner,” TikTok USDS’s latest statement said. “However, the US user experience may still have some technical issues, including when posting new content. We’re committed to bringing TikTok back to its full capacity as soon as possible. We’ll continue to provide updates.”

TikTokers will notice subtle changes, expert says

For TikTok’s new owners, the tech issues risk confirming fears that Trump wasn’t joking when he said he’d like to see TikTok tweaked to be “100 percent MAGA.”

Because of this bumpy transition, it seems likely that TikTok will continue to be heavily scrutinized once the USDS joint venture officially starts retraining the app on US data. As the algorithm undergoes tweaks, frequent TikTok users will likely be the first to pick up on subtle changes, especially if content unaligned with their political views suddenly starts appearing in their feeds when it never did before, Literat suggested.

Literat has researched both left- and right-leaning TikTok content. She told Ars that although left-leaning young users have for years loudly used the app to promote progressive views on topics like racial justice, gun reforms, or climate change, TikTok has never leaned one way or the other on the political spectrum.

Consider Christian or tradwife TikTok, Literat suggested, communities that built huge followings alongside leftist bubbles advocating for LGBTQ+ rights or Palestine solidarity.

“Political life on TikTok is organized into overlapping sub-communities, each with its own norms, humor, and tolerance for disagreement,” Literat said, adding that “the algorithm creates bubbles, so people experience very different TikToks.”

Literat told Ars that she wasn’t surprised when Trump suggested that TikTok would be better if it were more right-wing. But what concerned her most was the implication that Trump viewed TikTok “as a potential propaganda apparatus” and “a tool for political capture rather than a space for authentic expression and connection.”

“The historical irony is thick: we went from ‘TikTok is dangerous because it’s controlled by the Chinese government and might manipulate American users’ to ‘TikTok should be controlled by American interests and explicitly aligned with a particular political agenda,’” Literat said. “The concern was never really about foreign influence or manipulation per se—it was about who gets to do the influencing.”

David Greene, senior counsel for the Electronic Frontier Foundation, which fought the TikTok ban law, told Ars that users are justified in feeling concerned. However, technical errors or content moderation mistakes are nearly always the most likely explanations for issues, and there’s no way to know “what’s actually happening.” He noted that lawmakers have shaped how some TikTok users view the app after insisting that they accept that China was influencing the algorithm without providing evidence.

“For years, TikTok users were being told that they just needed to follow these assumptions the government was making about the dangers of TikTok,” Greene said. And “now they’re doing the same thing, making these assumptions that it’s now maybe some content policy is being done either to please the Trump administration or being controlled by it. We conditioned TikTok users basically to not have trust in the way decisions were made with the app.”

MAGA tweaks risk TikTok’s “death by a thousand cuts”

TikTok USDS likely wants to distance itself from Trump’s comments about making the app more MAGA. But the new owners have deep ties to Trump, including Larry Ellison, Oracle’s chief technology officer, who some critics suggest has benefited more than anyone else from Trump’s presidency. Greene noted that Trump’s son-in-law, Jared Kushner, is a key investor in Silver Lake. Both firms, along with MGX, now hold 15 percent stakes in the TikTok USDS joint venture, and MGX also appears to have Trump ties: CNBC reported that MGX used the Trump family cryptocurrency, World Liberty Financial, to invest $2 billion in Binance shortly before Trump pardoned Binance’s CEO on money laundering charges, which some viewed as a possible quid pro quo.

Greene said that EFF warned during the Supreme Court fight over the TikTok divest-or-ban law that “all you were doing was substituting concerns for Chinese propaganda, for concerns for US propaganda. That it was highly likely that if you force a sale and the sale is up to the approval of the president, it’s going to be sold to President’s lackeys.”

“I don’t see how it’d be good for users or for democracy, for TikTok to have an editorial policy that would make Trump happy,” Greene said.

If the app were suddenly tweaked to push more MAGA content into more feeds, young users who are critical of Trump wouldn’t all be brainwashed, Literat said. They would adapt, perhaps eventually finding other apps for activism.

However, TikTok may be hard to leave behind at a time when other popular apps seem to carry their own threats of political suppression, she suggested. Beyond the video-editing features that made TikTok a behemoth of social media, perhaps the biggest sticking point keeping users glued to TikTok is “fundamentally social,” Literat said.

“TikTok is where their communities are, where they’ve built audiences, where the conversations they care about are happening,” Literat said.

Rather than a mass exodus, Literat expects that TikTok’s fate could be “gradual erosion” or “death by a thousand cuts,” as users “likely develop workarounds, shift to other platforms for political content while keeping TikTok for entertainment, or create coded languages and aesthetic strategies to evade detection.”

CNN reported that one TikTok user already found that she could finally post an anti-ICE video after claiming to be a “fashion influencer” and speaking in code throughout the video, which criticized ICE for detaining a 5-year-old named Liam Conejo Ramos.

“Fashion influencing is in my blood,” she said in the video, which featured “a photo of Liam behind her,” CNN reported. “And even a company with bad customer service won’t keep me from doing my fashion review.”

Short-term, Literat thinks that longtime TikTok users experiencing inconsistent moderation will continue testing boundaries, documenting issues, and critiquing the app. That discussion will perhaps chill more speech on the platform, possibly even affecting the overall content mix appearing in feeds.

Long-term, however, TikTok’s changes under US owners “could fundamentally reshape TikTok’s role in political discourse.”

“I wouldn’t be surprised, unfortunately, if it suffers the fate of Twitter/X,” Literat said.

Literat told Ars that her TikTok research was initially sparked by a desire to monitor the “kind of authentic political expression the platform once enabled.” She worries that because user trust is now “damaged,” TikTok will never be the same.

“The tragedy is that TikTok genuinely was a space where young people—especially those from marginalized communities—could shape political conversations in ways that felt authentic and powerful,” Literat said. “I’m sad to say, I think that’s been irretrievably broken.”


Ashley is a senior policy reporter for Ars Technica, dedicated to tracking social impacts of emerging policies and new technologies. She is a Chicago-based journalist with 20 years of experience.



Supreme Court to decide how 1988 videotape privacy law applies to online video


Salazar v. Paramount hinges on video privacy law’s definition of “consumer.”


The Supreme Court is taking up a case on whether Paramount violated the 1988 Video Privacy Protection Act (VPPA) by disclosing a user’s viewing history to Facebook. The case, Michael Salazar v. Paramount Global, hinges on the law’s definition of the word “consumer.”

Salazar filed a class action against Paramount in 2022, alleging that it “violated the VPPA by disclosing his personally identifiable information to Facebook without consent,” Salazar’s petition to the Supreme Court said. Salazar had signed up for an online newsletter through 247Sports.com, a site owned by Paramount, and had to provide his email address in the process. Salazar then used 247Sports.com to view videos while logged in to his Facebook account.

“As a result, Paramount disclosed his personally identifiable information—including his Facebook ID and which videos he watched—to Facebook,” the petition said. “The disclosures occurred automatically because of the Facebook Pixel Paramount installed on its website. Facebook and Paramount then used this information to create and display targeted advertising, which increased their revenues.”

The 1988 law defines “consumer” as “any renter, purchaser, or subscriber of goods or services from a video tape service provider.” The phrase “video tape service provider” is defined to include providers of “prerecorded video cassette tapes or similar audio visual materials,” and thus arguably applies to more than just sellers of tapes.

The legal question for the Supreme Court “is whether the phrase ‘goods or services from a video tape service provider,’ as used in the VPPA’s definition of ‘consumer,’ refers to all of a video tape service provider’s goods or services or only to its audiovisual goods or services,” Salazar’s petition said. The Supreme Court granted his petition to hear the case in a list of orders released yesterday.

Courts disagree on defining “consumer”

The Facebook Pixel at the center of the lawsuit is now called the Meta Pixel. The Pixel is a piece of JavaScript code that can be added to a website to track visitors’ activity “and optimize your advertising performance,” as Meta describes it.
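For readers curious how a tracker like this ends up on a page: site owners paste a short script into their HTML, and every visit then fires events back to Meta. Below is a minimal, simplified sketch, not the verbatim base code; the pixel ID and video title are placeholders, and the real embed uses a more elaborate loader shim plus a fallback image tag.

```javascript
// Simplified sketch of a Meta Pixel embed (illustrative, not Meta's verbatim code).
// In practice this runs in a <script> tag after the page loads Meta's script:
//   https://connect.facebook.net/en_US/fbevents.js

// Queue fbq() calls until Meta's loader script arrives (stand-in for the real shim)
window.fbq = window.fbq || function () {
  (window.fbq.queue = window.fbq.queue || []).push(arguments);
};

fbq('init', 'PIXEL_ID_PLACEHOLDER'); // the site owner's pixel ID goes here
fbq('track', 'PageView');            // reports the visit back to Meta

// Sites can also report standard events, such as which video a visitor viewed:
fbq('track', 'ViewContent', { content_name: 'example-video-title' });
```

Because these events travel with the browser’s Facebook cookies, a logged-in user’s Facebook ID can be matched to the pages and videos they viewed, which is essentially the pairing at the heart of Salazar’s complaint.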

Salazar lost his case at a federal court in Nashville, Tennessee, and then lost an appeal at the US Court of Appeals for the 6th Circuit. (247Sports has its corporate address in Tennessee.) A three-judge panel of the appeals court ruled 2–1 to uphold the district court ruling. The majority said:

The Video Privacy Protection Act—as the name suggests—arose out of a desire to protect personal privacy in the records of the rental, purchase, or delivery of “audio visual materials.” Spurred by the publication of Judge Robert Bork’s video rental history on the eve of his confirmation hearings, Congress imposed stiff penalties on any “video tape service provider” who discloses personal information that identifies one of their “consumers” as having requested specific “audio visual materials.”

This case is about what “goods or services” a person must rent, purchase, or subscribe to in order to qualify as a “consumer” under the Act. Is “goods or services” limited to audio-visual content—or does it extend to any and all products or services that a store could provide? Michael Salazar claims that his subscription to a 247Sports e-newsletter qualifies him as a “consumer.” But since he did not subscribe to “audio visual materials,” the district court held that he was not a “consumer” and dismissed the complaint. We agree and so AFFIRM.

2–2 circuit split

Salazar’s petition to the Supreme Court alleged that the 6th Circuit ruling “imposes a limitation that appears nowhere in the relevant statutory text.” The 6th Circuit analysis “flout[s] the ordinary meaning of ‘goods or services,’” and “ignores that the VPPA broadly prohibits a video tape service provider—like Paramount here—from knowingly disclosing ‘personally identifiable information concerning any consumer of such provider,’” he told the Supreme Court.

The DC Circuit ruled the same way as the 6th Circuit in another case last year, but other appeals courts have ruled differently. The 7th Circuit held last year that “any purchase or subscription from a ‘video tape service provider’ satisfies the definition of ‘consumer,’ even if the thing purchased is clothing or the thing subscribed to is a newsletter.”

In Salazar v. National Basketball Association, which also involves Michael Salazar, the 2nd Circuit ruled in 2024 that Salazar was a consumer under the VPPA because the law’s “text, structure, and purpose compel the conclusion that that phrase is not limited to audiovisual ‘goods or services,’ and the NBA’s online newsletter falls within the plain meaning of that phrase.” The NBA petitioned the Supreme Court for review in hopes of overturning the 2nd Circuit ruling, but the petition to hear the case was denied in December.

Despite the NBA case being rejected by the high court, a circuit split can make a case ripe for Supreme Court review. “Put simply, the circuit courts have divided 2–2 over how to interpret the statutory phrase ‘goods or services from a video tape service provider,’” Salazar told the court. “As a result, there is a 2–2 circuit split concerning what it takes to become a ‘consumer’ under the VPPA.”

Paramount urged SCOTUS to reject case

While Salazar sued both Paramount and the NBA, he said the Paramount case “is a superior vehicle for resolving this exceptionally important question.” The case against the NBA is still under appeal on a different legal issue and “has had multiple amended pleadings since the lower courts decided the question, meaning the Court could not answer the question based on the now-operative allegations,” his petition said. By contrast, the Paramount case has a final judgment, no ongoing proceedings, and “can be reviewed on the same record the lower courts considered.”

Paramount urged the court to decline Salazar’s petition. Despite the circuit split on the “consumer” question, Paramount said that Salazar’s claims would fail in the 2nd and 7th circuits for different reasons. Paramount argued that “computer code shared in targeted advertising does not qualify as ‘personally identifiable information,’” and that “247Sports is not a ‘video tape service provider’ in the first place.”

“247Sports does not rent, sell, or offer subscriptions to video tapes. Nor does it stream movies or shows,” Paramount said. “Rather, it is a sports news website with articles, photos, and video clips—and all of the content at issue in this case is available for free to anybody on the Internet. That is a completely different business from renting video cassette tapes. The VPPA does not address it.”

Paramount further argued that Salazar’s case isn’t a good vehicle to consider the “consumer” definition because his “complaint fails for multiple additional reasons that could complicate further review.”

Paramount failed to convince the Supreme Court that the case wasn’t worth taking up, however. SCOTUSblog says that “the case will likely be scheduled for oral argument in the court’s 2026-27 term,” which begins in October 2026.


Jon is a Senior IT Reporter for Ars Technica. He covers the telecom industry, Federal Communications Commission rulemakings, broadband consumer affairs, court cases, and government regulation of the tech industry.



“IG is a drug”: Internal messages may doom Meta at social media addiction trial


Social media addiction test case

A loss could cost social media companies billions and force changes on platforms.


Anxiety, depression, eating disorders, and death. These can be the consequences for vulnerable kids who get addicted to social media, according to more than 1,000 personal injury lawsuits that seek to punish Meta and other platforms for allegedly prioritizing profits while downplaying child safety risks for years.

Social media companies have faced scrutiny before, with congressional hearings forcing CEOs to apologize, but until now, they’ve never had to convince a jury that they aren’t liable for harming kids.

This week, the first high-profile lawsuit—considered a “bellwether” case that could set meaningful precedent for the hundreds of other complaints—goes to trial. That lawsuit documents the case of a 19-year-old, K.G.M., who hopes the jury will agree that Meta and YouTube caused psychological harm by designing features like infinite scroll and autoplay to push her down a path that she alleged triggered depression, anxiety, self-harm, and suicidality.

TikTok and Snapchat were also targeted by the lawsuit, but both have settled. The Snapchat settlement came last week, while TikTok settled on Tuesday just hours before the trial started, Bloomberg reported.

For now, YouTube and Meta remain in the fight. K.G.M. allegedly started watching YouTube when she was 6 years old and joined Instagram by age 11. She’s fighting to claim untold damages—including potentially punitive damages—to help her family recoup losses from her pain and suffering and to punish social media companies and deter them from promoting harmful features to kids. She also wants the court to require prominent safety warnings on platforms to help parents be aware of the risks.

Platforms failed to blame mom for not reading TOS

A loss could cost social media companies billions, CNN reported.

To avoid that, platforms have alleged that other factors caused K.G.M.’s psychological harm—like school bullies and family troubles—while insisting that Section 230 and the First Amendment protect platforms from being blamed for any harmful content targeted to K.G.M.

They also argued that K.G.M.’s mom never read the terms of service and, therefore, supposedly would not have benefited from posted warnings. And ByteDance, before settling, seemingly tried to pass the buck by claiming that K.G.M. “already suffered mental health harms before she began using TikTok.”

But the judge, Carolyn B. Kuhl, wrote in a ruling denying all platforms’ motions for summary judgment that K.G.M. showed enough evidence that her claims don’t stem from content to go to trial.

Further, platforms can’t liken warnings buried in terms of service to prominently displayed warnings, Kuhl said, since K.G.M.’s mom testified she would have restricted the minor’s app usage if she were aware of the alleged risks.

Two platforms settling before the trial seems like a good sign for K.G.M. However, Snapchat has not settled other social media addiction lawsuits that it’s involved in, including one raised by school districts, and is perhaps waiting to see how K.G.M.’s case shakes out before taking further action.

To win, K.G.M.’s lawyers will need to “parcel out” how much harm is attributable to each platform’s design features, rather than to the content that was targeted to K.G.M., wrote Clay Calvert, a technology policy expert and senior fellow at the American Enterprise Institute think tank. Internet law expert Eric Goldman told The Washington Post that detailing those harms will likely be K.G.M.’s biggest struggle, since social media addiction has yet to be legally recognized, and tracing who caused what harms may not be straightforward.

However, Matthew Bergman, founder of the Social Media Victims Law Center and one of K.G.M.’s lawyers, told the Post that K.G.M. is prepared to put up this fight.

“She is going to be able to explain in a very real sense what social media did to her over the course of her life and how in so many ways it robbed her of her childhood and her adolescence,” Bergman said.

Internal messages may be “smoking-gun evidence”

The research is unclear on whether social media is harmful for kids or whether social media addiction exists, Tamar Mendelson, a professor at Johns Hopkins Bloomberg School of Public Health, told the Post. And so far, research only shows a correlation between Internet use and mental health, Mendelson noted, which could doom K.G.M.’s case and others’.

However, social media companies’ internal research might concern a jury, Bergman told the Post. On Monday, the Tech Oversight Project, a nonprofit working to rein in Big Tech, published a report analyzing recently unsealed documents in K.G.M.’s case that supposedly provide “smoking-gun evidence” that platforms “purposefully designed their social media products to addict children and teens with no regard for known harms to their wellbeing”—while putting increased engagement from young users at the center of their business models.

In the report, Sacha Haworth, executive director of The Tech Oversight Project, accused social media companies of “gaslighting and lying to the public for years.”

Most of the recently unsealed documents highlighted in the report came from Meta, which also faces a trial from dozens of state attorneys general on social media addiction this year.

Those documents included an email stating that Mark Zuckerberg—who is expected to testify at K.G.M.’s trial—decided that Meta’s top priority in 2017 was locking teens into using the company’s family of apps.

The next year, a Facebook internal document showed that the company pondered letting “tweens” access a private mode inspired by the popularity of fake Instagram accounts teens know as “finstas.” That document included an “internal discussion on how to counter the narrative that Facebook is bad for youth and admission that internal data shows that Facebook use is correlated with lower well-being (although it says the effect reverses longitudinally).”

Other allegedly damning documents showed Meta seemingly bragging that “teens can’t switch off from Instagram even if they want to” and an employee declaring, “oh my gosh yall IG is a drug,” likening all social media platforms to “pushers.”

Similarly, a 2020 Google document detailed the company’s plan to keep kids engaged “for life,” despite internal research showing young YouTube users were more likely to “disproportionately” suffer from “habitual heavy use, late night use, and unintentional use” deteriorating their “digital well-being.”

Shorts, YouTube’s TikTok rival, is also a concern for parents suing. Documents showed that three years later, Google chose to target teens with Shorts despite research flagging that the “two biggest challenges for teen wellbeing on YouTube” were prominently linked to watching Shorts. Those challenges included Shorts bombarding teens with “low quality content recommendations that can convey & normalize unhealthy beliefs or behaviors” and teens reporting that “prolonged unintentional use” was “displacing valuable activities like time with friends or sleep.”

Bergman told the Post that these documents will help the jury decide if companies owed young users better protections sooner but prioritized profits while pushing off interventions that platforms have more recently introduced amid mounting backlash.

“Internal documents that have been held establishing the willful misconduct of these companies are going to—for the first time—be given a public airing,” Bergman said. “The public is going to know for the first time what social media companies have done to prioritize their profits over the safety of our kids.”

Platforms failed to get experts’ testimony tossed

One seeming advantage K.G.M. has heading into the trial is that tech companies failed to get the expert testimony backing her claims dismissed.

Platforms tried to exclude testimony from several experts, including Kara Bagot, a board-certified adult, child, and adolescent psychiatrist, as well as Arturo Bejar, a former Meta safety researcher and whistleblower. They claimed that experts’ opinions were irrelevant because they were based on K.G.M.’s interactions with content. They also suggested that child safety experts’ opinions “violate the standards of reliability” since the causal links they draw don’t account for “alternative explanations” and allegedly “contradict the experts’ own statements in non-litigation contexts.”

However, Kuhl ruled that platforms will have the opportunity to counter experts’ opinions at trial, while reminding social media companies that “ultimately, the critical question of causation is one that must be determined by the jury.” Only one expert’s testimony was excluded, the Social Media Victims Law Center noted: that of a licensed clinical psychologist deemed unqualified.

“Testimony by Bagot as to design features that were employed on TikTok as well as on other social media platforms is directly relevant to the question of whether those design features cause the type of harms allegedly suffered by K.G.M. here,” Kuhl wrote.

That means that a jury will get a chance to weigh Bagot’s opinion that “social media overuse and addiction causes or plays a substantial role in causing or exacerbating psychopathological harms in children and youth, including depression, anxiety and eating disorders, as well as internalizing and externalizing psychopathological symptoms.”

The jury will also consider the insights and information Bejar (a fact witness and former consultant for the company) will share about Meta’s internal safety studies. That includes hearing about “his personal knowledge and experience related to how design defects on Meta’s platforms can cause harm to minors (e.g., age verification, reporting processes, beauty filters, public like counts, infinite scroll, default settings, private messages, reels, ephemeral content, and connecting children with adult strangers),” as well as “harms associated with Meta’s platforms including addiction/problematic use, anxiety, depression, eating disorders, body dysmorphia, suicidality, self-harm, and sexualization.” 

If K.G.M. can convince the jury that she was not harmed by platforms’ failure to remove content but by companies “designing their platforms to addict kids” and “developing algorithms that show kids not what they want to see but what they cannot look away from,” her case could become a “data point” for “settling similar cases en masse,” Bergman told Barron’s.

“She is very typical of so many children in the United States—the harms that they’ve sustained and the way their lives have been altered by the deliberate design decisions of the social media companies,” Bergman told the Post.




“Wildly irresponsible”: DOT’s use of AI to draft safety rules sparks concerns

At DOT, Trump likely hopes to see many rules quickly updated to modernize airways and roadways. In a report highlighting the Office of Science and Technology Policy’s biggest “wins” in 2025, the White House credited DOT with “replacing decades-old rules with flexible, innovation-friendly frameworks,” including fast-tracking rules to allow for more automated vehicles on the roads.

Right now, DOT expects that Gemini can be relied on to “handle 80 to 90 percent of the work of writing regulations,” ProPublica reported. Eventually all federal workers who rely on AI tools like Gemini to draft rules “would fall back into merely an oversight role, monitoring ‘AI-to-AI interactions,’” ProPublica reported.

Google silent on AI drafting safety rules

Google did not respond to Ars’ request to comment on this use case for Gemini, which could spread across government under Trump’s direction.

Instead, the tech giant posted a blog on Monday, pitching Gemini for government more broadly and promising federal workers that AI would bring “creative problem-solving to the most critical aspects of their work.”

Google has been competing with AI rivals for government contracts, undercutting OpenAI and Anthropic’s $1 deals by offering a year of access to Gemini for $0.47.

The DOT contract seems important to Google. In a December blog, the company celebrated that DOT was “the first cabinet-level agency to fully transition its workforce away from legacy providers to Google Workspace with Gemini.”

At that time, Google suggested this move would help DOT “ensure the United States has the safest, most efficient, and modern transportation system in the world.”

Immediately, Google encouraged other federal leaders to launch their own efforts using Gemini.

“We are committed to supporting the DOT’s digital transformation and stand ready to help other federal leaders across the government adopt this blueprint for their own mission successes,” Google’s blog said.

DOT did not immediately respond to Ars’ request for comment.



Data center power outage took out TikTok first weekend under US ownership

As the app comes back online, users have also taken note that TikTok is collecting more of their data under US control. As Wired reported, TikTok asked US users to agree to a new terms of service and privacy policy, which allows TikTok to potentially collect “more detailed information about its users, including precise location data.”

“Before this update, the app did not collect the precise, GPS-derived location data of US users,” Wired reported. “Now, if you give TikTok permission to use your phone’s location services, then the app may collect granular information about your exact whereabouts.”

New policies also pushed users to agree to share all their AI interactions, which allows TikTok to store their metadata and trace AI inputs back to specific accounts.

With the app already seeming more invasive and less reliable, TikTok users are likely left wondering how much their favorite app might change under new ownership as the TikTok USDS Joint Venture prepares to retrain the app’s algorithm.

Trump has said that he wants to see the app become “100 percent MAGA,” prompting fears that “For You” pages might soon be flooded with right-wing content or that leftist content like anti-ICE criticism might be suppressed. And The Information reported in July that transferring millions of users over to the US-trained app is expected to cause more “technical issues.”



EU launches formal investigation of xAI over Grok’s sexualized deepfakes

The European probe comes after UK media regulator Ofcom opened a formal investigation into Grok, while Malaysia and Indonesia have banned the chatbot altogether.

Following the backlash, xAI restricted the use of Grok to paying subscribers and said it has “implemented technological measures” to limit Grok from generating certain sexualized images.

Musk has also said “anyone using Grok to make illegal content will suffer the same consequences as if they upload illegal content.”

An EU official said that “with the harm that is exposed to individuals that are subject to these images, we have not been convinced so far by what mitigating measures the platform has taken to have that under control.”

The company, which acquired Musk’s social media site X last year, has designed its AI products to have fewer content “guardrails” than competitors such as OpenAI and Google. Musk called its Grok model “maximally truth-seeking.”

The commission fined X €120 million in December last year for breaching its transparency regulations, providing insufficient access to data, and deceptively designing its blue ticks for verified accounts.

The fine was criticized by Musk and the US government, with the Trump administration claiming the EU was unfairly targeting American groups and infringing freedom of speech principles championed by the MAGA movement.

X did not immediately reply to a request for comment.

© 2026 The Financial Times Ltd. All rights reserved. Not to be redistributed, copied, or modified in any way.



Poland’s energy grid was targeted by never-before-seen wiper malware

Researchers on Friday said that Poland’s electric grid was targeted by wiper malware, likely unleashed by Russian state hackers, in an attempt to disrupt electricity delivery operations.

The cyberattack occurred during the last week of December, Reuters reported. The news organization said it was aimed at disrupting communications between renewable energy installations and power distribution operators but failed for reasons that were not explained.

Wipers R Us

On Friday, security firm ESET said the malware responsible was a wiper, a type of malware that permanently erases code and data stored on servers with the goal of destroying operations completely. After studying the tactics, techniques, and procedures (TTPs) used in the attack, company researchers said the wiper was likely the work of a Russian government hacker group tracked under the name Sandworm.

“Based on our analysis of the malware and associated TTPs, we attribute the attack to the Russia-aligned Sandworm APT with medium confidence due to a strong overlap with numerous previous Sandworm wiper activity we analyzed,” said ESET researchers. “We’re not aware of any successful disruption occurring as a result of this attack.”

Sandworm has a long history of destructive attacks waged on behalf of the Kremlin and aimed at adversaries. Most notable was one in Ukraine in December 2015. It left roughly 230,000 people without electricity for about six hours during one of the coldest months of the year. The hackers used general-purpose malware known as BlackEnergy to penetrate power companies’ supervisory control and data acquisition systems and, from there, activate legitimate functionality to stop electricity distribution. The incident was the first known malware-facilitated blackout.



DHS keeps trying and failing to unmask anonymous ICE critics online

The Department of Homeland Security (DHS) has backed down from a fight to unmask the owners of Instagram and Facebook accounts monitoring Immigration and Customs Enforcement (ICE) activity in Pennsylvania.

One of the anonymous account holders, John Doe, sued to block ICE from identifying him and other critics online through summonses to Meta that he claimed infringed on core First Amendment-protected activity.

DHS initially fought Doe’s motion to quash the summonses, arguing that the community watch groups endangered ICE agents by posting “pictures and videos of agents’ faces, license plates, and weapons, among other things.” This was akin to “threatening ICE agents to impede the performance of their duties,” DHS alleged. DHS’s arguments echoed DHS Secretary Kristi Noem, who has claimed that identifying ICE agents is a crime, even though Wired noted that ICE employees often post easily discoverable LinkedIn profiles.

To Doe, the agency seemed intent on testing the waters to see if it could seize authority to unmask all critics online by invoking a customs statute that allows agents to subpoena information on goods entering or leaving the US.

But then, on January 16, DHS abruptly reversed course, withdrawing its summonses from Meta.

A court filing confirmed that DHS dropped its requests for subscriber information last week, after initially demanding Doe’s “postal code, country, all email address(es) on file, date of account creation, registered telephone numbers, IP address at account signup, and logs showing IP address and date stamps for account accesses.”

The filing does not explain why DHS decided to withdraw its requests.

However, previously, DHS requested similar information from Meta about six Instagram community watch groups that shared information about ICE activity in Los Angeles and other locations. DHS withdrew those requests, too, after account holders defended their First Amendment rights and filed motions to quash their summonses, Doe’s court filing said.



White House alters arrest photo of ICE protester, says “the memes will continue”

Protesters disrupted services on Sunday at the Cities Church in St. Paul, chanting “ICE OUT” and “Justice for Renee Good.” The St. Paul Pioneer Press quoted Levy Armstrong as saying, “When you think about the federal government unleashing barbaric ICE agents upon our community and all the harm that they have caused, to have someone serving as a pastor who oversees these ICE agents is almost unfathomable to me.”

The church website lists David Easterwood as one of its pastors. Protesters said this is the same David Easterwood who is listed as a defendant in a lawsuit that Minnesota Attorney General Keith Ellison filed against Noem and other federal officials. The lawsuit lists Easterwood as a defendant “in his official capacity as Acting Director, Saint Paul Field Office, U.S. Immigration and Customs Enforcement.”

Levy Armstrong, who is also a former president of the NAACP’s Minneapolis branch, was arrested yesterday morning. Announcing the arrest, Attorney General Pam Bondi wrote, “WE DO NOT TOLERATE ATTACKS ON PLACES OF WORSHIP.” Bondi alleged that Levy Armstrong “played a key role in organizing the coordinated attack on Cities Church in St. Paul, Minnesota.”

Multiple arrests

Noem said Levy Armstrong “is being charged with a federal crime under 18 USC 241,” which prohibits “conspir[ing] to injure, oppress, threaten, or intimidate any person in any State, Territory, Commonwealth, Possession, or District in the free exercise or enjoyment of any right or privilege secured to him by the Constitution or laws of the United States.”

“Religious freedom is the bedrock of the United States—there is no first amendment right to obstruct someone from practicing their religion,” Noem wrote.

St. Paul School Board member Chauntyll Allen was also arrested. Attorneys for the Cities Church issued statements supporting the arrests and saying they “are exploring all legal options to protect the church and prevent further invasions.”

A federal magistrate judge initially ruled that Levy Armstrong and Allen could be released, but they were still being held last night after the government “made a motion to stay the release for further review, claiming they might be flight risks,” the Pioneer Press wrote.



TikTok deal is done; Trump wants praise while users fear MAGA tweaks


US will soon retrain TikTok

“I am so happy”: Trump closes deal that hands TikTok US to his allies.

The TikTok deal is done, and Donald Trump is claiming a win, although it remains unclear if the joint venture he arranged with ByteDance and the Chinese government actually resolves Congress’ national security concerns.

In a press release Thursday, TikTok announced the “TikTok USDS Joint Venture LLC,” an entity established to keep TikTok operating in the US.

ByteDance retains 19.9 percent of the joint venture, which has been valued at $14 billion, giving Americans majority ownership, the release said. Three managing investors—Silver Lake, Oracle, and MGX—each hold 15 percent, while other investors, including Dell Technologies CEO Michael Dell’s investment firm, Dell Family Office, hold smaller, undisclosed stakes.

Americans will also have majority control over the joint venture’s seven-member board. TikTok CEO Shou Chew holds ByteDance’s only seat. Finalizing the deal was a “great move,” Chew told TikTok employees in an internal memo, The New York Times reported.

Two former TikTok employees will lead the joint venture. Adam Presser, who previously served as TikTok’s global head of Operations and Trust & Safety, has been named CEO. And Kim Farrell, TikTok’s former global head of Business Operations Protection, will serve as chief security officer.

Trump has claimed the deal meets requirements for “qualified divestiture” to avoid a TikTok ban otherwise required under the Protecting Americans from Foreign Adversary Controlled Applications Act. However, questions remain, as lawmakers have not yet analyzed the terms of the deal to determine whether that’s true.

The law requires the divestment “to end any ‘operational relationship’ between ByteDance and TikTok in the United States,” critics told the NYT. That could be a problem, since TikTok’s release makes it clear that ByteDance will maintain some control over the TikTok US app’s operations.

For example, while the US owners will retrain the algorithm and manage data security, ByteDance owns the algorithm and “will manage global product interoperability and certain commercial activities, including e-commerce, advertising, and marketing.” The Trump administration seemingly agreed to these terms to ensure that the US TikTok isn’t cut off from the rest of the world on the app.

“Interoperability enables the Joint Venture to provide US users with a global TikTok experience, ensuring US creators can be discovered and businesses can operate on a global scale,” the release said.

Perhaps also concerning to Congress, Slate noted, is that while ByteDance may be a minority owner, it remains the largest individual shareholder.

Michael Sobolik, an expert on US-China policy and senior fellow at the right-leaning think tank the Hudson Institute, told the NYT that the Trump administration “may have saved TikTok, but the national security concerns are still going to continue.”

Some critics, including Republicans, have vowed to scrutinize the deal.

On Thursday, Senator Edward Markey (D-Mass.) complained that the White House had repeatedly denied requests for information about the deal. The administration has provided “virtually no details about this agreement, including whether TikTok’s algorithm is truly free of Chinese influence,” Markey said.

“This lack of transparency reeks,” Markey said. “Congress has a responsibility to investigate this deal, demand transparency, and ensure that any arrangement truly protects national security while keeping TikTok online.”

In December, Representative John Moolenaar (R-Mich.), chair of the House Select Committee on China, said that he wants to hold a hearing with TikTok leadership to discuss how the deal addresses national security concerns. On Thursday, Moolenaar said he “has two specific questions for TikTok’s new American owners,” Punchbowl News reported.

“Can we ensure that the algorithm is not influenced by the Chinese Communist Party?” Moolenaar said. “And two, can we ensure that the data of Americans is secure?”

Moolenaar may be satisfied by the terms, as the NYT suggested that China hawks in Washington appeared to trust that Trump’s arrangement is a qualified divestiture. TikTok’s release said that Oracle will protect US user data in a secure US cloud data environment that will regularly be audited by third-party cybersecurity experts. The algorithm will be licensed from ByteDance and retrained on US user data, the release said, and Vice President JD Vance has confirmed that the joint venture “will have control over how the algorithm pushes content to users.”

Last September, a spokesperson for the House China Committee told Politico that “any agreement must comply with the historic bipartisan law passed last year to protect the American people, including the complete divestment of ByteDance control and a fully decoupled algorithm.”

Users brace for MAGA tweaks to algorithm

“I am so happy to have helped in saving TikTok!” Trump said on Truth Social after the deal was finalized. “It will now be owned by a group of Great American Patriots and Investors, the Biggest in the World, and will be an important Voice.”

However, it’s unclear to TikTokers how the app might change as Trump allies take control of the addictive algorithm that drew millions to the app. Lawmakers had feared the Chinese Communist Party could influence the algorithm to target US users with propaganda, and Trump’s deal was supposed to mitigate that.

Critics worry not only that ByteDance’s continued ownership of the algorithm could let the company keep influencing content, but also that the app’s recommendations could take a right-leaning slant under US control.

Trump has already said that he’d like to see TikTok go “100 percent MAGA,” and his allies will now be in charge of “deciding which posts to leave up and which to take down,” the NYT noted. Anupam Chander, a law and technology professor at Georgetown University, told the NYT that the TikTok deal offered Trump and his allies “more theoretical room for one side’s views to get a greater airing.”

“My worry all along is that we may have traded fears of foreign propaganda for the reality of domestic propaganda,” Chander said.

For business owners who rely on the app, there’s also the potential that the app could be glitchy after US owners start porting data and retraining the algorithm.

Trump clearly hopes the deal will endear him to TikTok users. He sought praise on Truth Social, writing, “I only hope that long into the future I will be remembered by those who use and love TikTok.”

China “played” Trump, expert says

So far, the Chinese government has not commented on the deal’s finalization, but Trump thanked Chinese President Xi Jinping in his Truth Social post “for working with us and, ultimately, approving the Deal.”

“He could have gone the other way, but didn’t, and is appreciated for his decision,” Trump said.

Experts have suggested that China benefits from the deal by keeping the most lucrative part of TikTok while the world watches it export its technology to the US.

When Trump first announced the deal in September, critics immediately attacked him for letting China keep the algorithm. One US advisor close to the deal told the Financial Times that “Trump always chickens out,” noting that “after all this, China keeps the algorithm.”

On Thursday, Sobolik told Politico that Trump “got played” by Xi after taking “terrible advice from his staff” during trade negotiations that some critics said gave China the upper hand.

Trump sees things differently, writing on Truth Social that the TikTok deal came to “a very dramatic, final, and beautiful conclusion.”

Whether the deal is “dramatic,” “final,” or “beautiful” depends on who you ask, though, as it could face legal challenges and disrupt TikTok’s beloved content feeds. The NYT suggested that the deal took so long to finalize that TikTokers don’t even care anymore, while several outlets noted that Trump’s deal is very close to the Project Texas arrangement that Joe Biden pushed until it was deemed inadequate to address national security risks.

Through Project Texas, Oracle was supposed to oversee TikTok US user data, auditing for security risks while ByteDance controlled the code. The joint venture’s “USDS” coinage “even originated from Project Texas,” Slate noted.

Lindsay Gorman, a former senior advisor in the Biden administration, told the NYT that “we’ve gone round and round and ended up not too far from where we started.”




Asking Grok to delete fake nudes may force victims to sue in Musk’s chosen court


Millions likely harmed by Grok-edited sex images as X advertisers shrugged.

Journalists and advocates have been trying to grasp how many victims in total were harmed by Grok’s nudifying scandal after xAI delayed restricting outputs and app stores refused to cut off access for days.

The latest estimates show that perhaps millions were harmed in the days immediately after Elon Musk promoted Grok’s undressing feature on his own X feed by posting a pic of himself in a bikini.

Over just 11 days after Musk’s post, Grok sexualized more than 3 million images, of which 23,000 were of children, the Center for Countering Digital Hate (CCDH) estimated in research published Thursday.

That figure may be inflated, since CCDH did not analyze prompts and could not determine if images were already sexual prior to Grok’s editing. However, The New York Times shared the CCDH report alongside its own analysis, conservatively estimating that about 41 percent (1.8 million) of 4.4 million images Grok generated between December 31 and January 8 sexualized men, women, and children.

For xAI and X, the scandal brought scrutiny, but it also helped spike X engagement at a time when Meta’s rival app, Threads, has begun inching ahead of X in daily usage by mobile device users, TechCrunch reported. Without mentioning Grok, X’s head of product, Nikita Bier, celebrated the “highest engagement days on X” in an X post on January 6, just days before X finally started restricting some of Grok’s outputs for free users.

Whether or not xAI intended the Grok scandal to boost X and Grok usage, that appears to be the outcome. The Times charted Grok trends and found that in the nine days prior to Musk’s post, Grok was used only about 300,000 times in total to generate images, but after Musk’s post, “the number of images created by Grok surged to nearly 600,000 per day” on X.

In an article declaring that “Elon Musk cannot get away with this,” writers for The Atlantic suggested that X users “appeared to be imitating and showing off to one another,” believing that using Grok to create revenge porn “can make you famous.”

X has previously warned that X users who generate illegal content risk permanent suspensions, but X has not confirmed if any users have been banned since public outcry over Grok’s outputs began. Ars asked and will update this post if X provides any response.

xAI fights victim who begged Grok to remove images

At first, X only limited Grok’s image editing for some free users, which The Atlantic noted made it seem like X was “essentially marketing nonconsensual sexual images as a paid feature of the platform.”

But then, on January 14, X took its strongest action to restrict Grok’s harmful outputs—blocking outputs prompted by both free and paid X users. That move came after several countries, perhaps most notably the United Kingdom, and at least one state, California, launched probes.

Crucially, X’s updates did not apply to the Grok app or website, where Grok reportedly can still be used to generate nonconsensual images.

That’s a problem for victims targeted by X users, according to Carrie Goldberg, a lawyer representing Ashley St. Clair, one of the first Grok victims to sue xAI; St. Clair also happens to be the mother of one of Musk’s children.

Goldberg told Ars that victims like St. Clair want changes on all Grok platforms, not just X. But it’s not easy to “compel that kind of product change in a lawsuit,” Goldberg said. That’s why St. Clair is hoping the court will agree that Grok is a public nuisance, a claim that provides some injunctive relief to prevent broader social harms if she wins.

Currently, St. Clair is seeking a temporary injunction that would block Grok from generating harmful images of her. But before she can get that order, if she wants a fair shot at winning the case, St. Clair must fend off xAI’s countersuit and its push to move her lawsuit into Musk’s preferred Texas court, a recent court filing suggests.

In that fight, xAI is arguing that St. Clair is bound by xAI’s terms of service, which were updated the day after she notified the company of her intent to sue.

Alarmingly, xAI argued that St. Clair effectively agreed to the TOS when she started prompting Grok to delete her nonconsensual images—which is the only way X users had to get images removed quickly, St. Clair alleged. It seems xAI is hoping to turn moments of desperation, where victims beg Grok to remove images, into a legal shield.

In the filing, Goldberg wrote that St. Clair’s lawsuit has nothing to do with her own use of Grok, noting that the harassing images could have been made even if she never used any of xAI’s products. For that reason alone, xAI should not be able to force a change in venue.

Further, St. Clair’s use of Grok was clearly under duress, Goldberg argued, noting that one of the photos that Grok edited showed St. Clair’s toddler’s backpack.

“REMOVE IT!!!” St. Clair demanded of Grok, allegedly feeling increasingly vulnerable every second the images remained online.

Goldberg wrote that Barry Murphy, an X Safety employee, provided an affidavit claiming that this instance and others of St. Clair “begging @Grok to remove illegal content constitutes an assent to xAI’s TOS.”

But “such cannot be the case,” Goldberg argued.

Faced with “the implicit threat that Grok would keep the images of St. Clair online and, possibly, create more of them,” St. Clair had little choice but to interact with Grok, Goldberg argued. And that prompting should not gut protections under New York law that St. Clair seeks to claim in her lawsuit, Goldberg argued, asking the court to void St. Clair’s xAI contract and reject xAI’s motion to switch venues.

Should St. Clair win her fight to keep the lawsuit in New York, the case could help set precedent for perhaps millions of other victims who may be contemplating legal action but fear facing xAI in Musk’s chosen court.

“It would be unjust to expect St. Clair to litigate in a state so far from her residence, and it may be so that trial in Texas will be so difficult and inconvenient that St. Clair effectively will be deprived of her day in court,” Goldberg argued.

Grok may continue harming kids

The estimated volume of sexualized images reported this week is alarming because it suggests that Grok, at the peak of the scandal, may have been generating more child sexual abuse material (CSAM) than X finds on its platform each month.

In 2024, X Safety reported 686,176 instances of CSAM to the National Center for Missing and Exploited Children, which works out to about 57,000 CSAM reports each month. If the CCDH’s estimate of 23,000 Grok outputs sexualizing children over an 11-day span is accurate, then, extrapolated to a full month, Grok’s output may have exceeded 62,000 images if left unchecked.
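Spelled out, and assuming a 30-day month for the extrapolation (a rough approximation, since Grok’s actual daily output varied), the arithmetic behind both monthly figures is:

\[
\frac{686{,}176\ \text{reports}}{12\ \text{months}} \approx 57{,}181\ \text{reports per month},
\qquad
\frac{23{,}000\ \text{outputs}}{11\ \text{days}} \times 30\ \text{days} \approx 62{,}727\ \text{outputs per month}
\]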

NCMEC did not immediately respond to Ars’ request to comment on how the estimated volume of Grok’s CSAM compares to X’s average CSAM reporting. But NCMEC previously told Ars that “whether an image is real or computer-generated, the harm is real, and the material is illegal.” That suggests Grok could remain a thorn in NCMEC’s side: the CCDH has warned that even when X removes harmful Grok posts, “images could still be accessed via separate URLs,” meaning that Grok’s CSAM and other harmful outputs could continue spreading. The CCDH also found instances of alleged CSAM that X had not removed as of January 15.

This is why child safety experts have advocated for more testing to ensure that AI tools like Grok don’t roll out capabilities like the undressing feature. NCMEC previously told Ars that “technology companies have a responsibility to prevent their tools from being used to sexualize or exploit children.” Amid a rise in AI-generated CSAM, the UK’s Internet Watch Foundation similarly warned that “it is unacceptable that technology is released which allows criminals to create this content.”

xAI advertisers, investors, partners remain silent

Yet, for Musk and xAI, there have been no meaningful consequences for Grok’s controversial outputs.

It’s possible that recently launched probes will result in legal action in California or fines in the UK or elsewhere, but those investigations will likely take months to conclude.

While US lawmakers have done little to intervene, some Democratic senators have pressed Google and Apple’s CEOs to explain why X and the Grok app were never restricted in their app stores, demanding a response by January 23. One day ahead of that deadline, senators confirmed to Ars that they had received no responses.

Unsurprisingly, neither Google nor Apple responded to Ars’ request to confirm whether a response is forthcoming or to provide any statement on their decisions to keep the apps accessible. Both companies have been silent for weeks, along with other Big Tech companies that appear afraid to speak out against Musk’s chatbot.

Microsoft and Oracle, which “run Grok on their cloud services,” as well as Nvidia and Advanced Micro Devices, “which sell xAI the computer chips needed to train and run Grok,” declined The Atlantic’s request to comment on how the scandal has impacted their decisions to partner with xAI. Additionally, a dozen of xAI’s key investors simply didn’t respond when The Atlantic asked if “they would continue partnering with xAI absent the company changing its products.”

Similarly, dozens of advertisers refused Popular Information’s request to explain why there was no ad boycott over the Grok CSAM reports. That includes companies that once boycotted X over an antisemitic post from Musk, like “Amazon, Microsoft, and Google, all of which have advertised on X in recent days,” Popular Information reported.

It’s possible that advertisers fear Musk’s legal wrath if they boycott his platforms. The CCDH defeated a lawsuit from Musk last year, though that ruling is pending appeal. And Musk’s so-called “thermonuclear” lawsuit against advertisers remains ongoing, with a trial date set for this October.

The Atlantic suggested that xAI stakeholders are likely hoping the Grok scandal will blow over and that they’ll escape unscathed by staying silent. But so far, the backlash has remained strong, perhaps because, while “deepfakes are not new,” xAI “has made them a dramatically larger problem than ever before,” The Atlantic opined.

Mr. Deepfakes, which the NYT described as “one of the largest forums dedicated to making fake images of real people,” shut down in 2024 after public backlash over 43,000 sexual deepfake videos depicting about 3,800 individuals. If the most recent estimates of Grok’s deepfakes are accurate, xAI shows how much more damage can be done when nudifying becomes a feature of one of the world’s biggest social networks and nobody with the power to stop it moves to intervene.

“This is industrial-scale abuse of women and girls,” Imran Ahmed, the CCDH’s chief executive, told NYT. “There have been nudifying tools, but they have never had the distribution, ease of use or the integration into a large platform that Elon Musk did with Grok.”


Ashley is a senior policy reporter for Ars Technica, dedicated to tracking social impacts of emerging policies and new technologies. She is a Chicago-based journalist with 20 years of experience.


judge-orders-stop-to-fbi-search-of-devices-seized-from-washington-post-reporter

Judge orders stop to FBI search of devices seized from Washington Post reporter

The Post asked for an expedited briefing and hearing schedule. Judge Porter ordered the government to file a reply by January 28 and scheduled oral arguments for February 6.

Post: “Government refused” to stop search

FBI agents reportedly seized Natanson’s phone, a 1TB portable hard drive, a device for recording interviews, a Garmin watch, a personal laptop, and a laptop issued by The Washington Post. Natanson has said she’s built up a contact list of 1,100 current and former government employees and communicates with them in encrypted Signal chats.

“The day the FBI raided Natanson’s residence, undersigned counsel reached out to the government to advise that the seized items contain materials protected by the First Amendment and the attorney-client privileges,” attorneys for The Washington Post and Natanson told the court. “Undersigned counsel asked the government to refrain from reviewing the documents pending judicial resolution of the dispute, but the government refused.”

The filing said that unless a standstill order is issued, “the government will commence an unrestrained search of a journalist’s work product that violates the First Amendment and the attorney-client privilege, ignores federal statutory safeguards for journalists, and threatens the trust and confidentiality of sources.”

The six devices seized from Natanson “contain essentially her entire professional universe: more than 30,000 Post emails from the last year alone, confidential information from and about sources (including her sources and her colleagues’ sources), recordings of interviews, notes on story concepts and ideas, drafts of potential stories, communications with colleagues about sources and stories, and The Post’s content management system that houses all articles in progress,” the Post said. “The devices also housed Natanson’s encrypted Signal messaging platform that she used to communicate with her more than 1,100 sources. Without her devices, she ‘literally cannot contact’ these sources.”
