Meta

Judge on Meta’s AI training: “I just don’t understand how that can be fair use”


Judge downplayed Meta’s “messed up” torrenting in lawsuit over AI training.

A judge who may be the first to rule on whether AI training data is fair use appeared skeptical Thursday at a hearing where Meta faced off with book authors over the social media company’s alleged copyright infringement.

Meta, like most AI companies, holds that training must be deemed fair use, or else the entire AI industry could face immense setbacks, wasting precious time negotiating data contracts while falling behind global rivals. Meta urged the court to rule that AI training is a transformative use that only references books to create an entirely new work that doesn’t replicate authors’ ideas or replace books in their markets.

At the hearing, which followed both sides’ requests for summary judgment, Judge Vince Chhabria pushed back on Meta attorneys’ argument that the company’s Llama AI models posed no threat to authors in their markets, Reuters reported.

“You have companies using copyright-protected material to create a product that is capable of producing an infinite number of competing products,” Chhabria said. “You are dramatically changing, you might even say obliterating, the market for that person’s work, and you’re saying that you don’t even have to pay a license to that person.”

Declaring, “I just don’t understand how that can be fair use,” the shrewd judge apparently drew little response from Meta’s attorney, Kannon Shanmugam, apart from a suggestion that any alleged threat to authors’ livelihoods was “just speculation,” Wired reported.

Authors may need to sharpen their case, which Chhabria warned could be “taken away by fair use” if none of the authors suing, including Sarah Silverman, Ta-Nehisi Coates, and Richard Kadrey, can show “that the market for their actual copyrighted work is going to be dramatically affected.”

Determined to probe this key question, Chhabria pushed authors’ attorney, David Boies, to point to specific evidence of market harms that seemed noticeably missing from the record.

“It seems like you’re asking me to speculate that the market for Sarah Silverman’s memoir will be affected by the billions of things that Llama will ultimately be capable of producing,” Chhabria said. “And it’s just not obvious to me that that’s the case.”

But if authors can prove fears of market harms are real, Meta might struggle to win over Chhabria, and that could set a precedent impacting copyright cases challenging AI training on other kinds of content.

The judge repeatedly appeared to be sympathetic to authors, suggesting that Meta’s AI training may be a “highly unusual case” where even though “the copying is for a highly transformative purpose, the copying has the high likelihood of leading to the flooding of the markets for the copyrighted works.”

And when Shanmugam argued that copyright law doesn’t shield authors from “protection from competition in the marketplace of ideas,” Chhabria resisted the framing that authors weren’t potentially being robbed, Reuters reported.

“But if I’m going to steal things from the marketplace of ideas in order to develop my own ideas, that’s copyright infringement, right?” Chhabria responded.

Wired noted that he asked Meta’s lawyers, “What about the next Taylor Swift?” If AI made it easy to knock off a young singer’s sound, how could she ever compete if AI produced “a billion pop songs” in her style?

In a statement, Meta’s spokesperson reiterated the company’s defense that AI training is fair use.

“Meta has developed transformational open source AI models that are powering incredible innovation, productivity, and creativity for individuals and companies,” Meta’s spokesperson said. “Fair use of copyrighted materials is vital to this. We disagree with Plaintiffs’ assertions, and the full record tells a different story. We will continue to vigorously defend ourselves and to protect the development of GenAI for the benefit of all.”

Meta’s torrenting seems “messed up”

Some have pondered why Chhabria appeared so focused on market harms, instead of hammering Meta for admittedly illegally pirating books that it used for its AI training, which seems to be obvious copyright infringement. According to Wired, “Chhabria spoke emphatically about his belief that the big question is whether Meta’s AI tools will hurt book sales and otherwise cause the authors to lose money,” not whether Meta’s torrenting of books was illegal.

The torrenting “seems kind of messed up,” Chhabria said, but “the question, as the courts tell us over and over again, is not whether something is messed up but whether it’s copyright infringement.”

It’s possible that Chhabria dodged the question for procedural reasons. In a court filing, Meta argued that authors had moved for summary judgment on Meta’s alleged copying of their works, not on “unsubstantiated allegations that Meta distributed Plaintiffs’ works via torrent.”

In the court filing, Meta argued that even if Chhabria agreed that the authors’ request for “summary judgment is warranted on the basis of Meta’s distribution, as well as Meta’s copying,” the authors “lack evidence to show that Meta distributed any of their works.”

According to Meta, authors abandoned any claims that Meta’s seeding of the torrented files served to distribute works, leaving only claims about Meta’s leeching. Meta argued that the authors “admittedly lack evidence that Meta ever uploaded any of their works, or any identifiable part of those works, during the so-called ‘leeching’ phase,” relying instead on expert estimates based on how torrenting works.
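
To illustrate the flavor of that expert reasoning (a hypothetical sketch, not the analysis actually in the record), such an estimate turns on a couple of assumed numbers: how much data was torrented and what fraction of it a BitTorrent client typically uploads while still downloading.

```python
# Hypothetical back-of-envelope estimate of data uploaded during a
# torrent's "leeching" (download) phase. Both numbers are illustrative
# assumptions, not figures from the case record.

DATASET_SIZE_TB = 80        # assumed size of the torrented dataset
LEECH_SHARE_RATIO = 0.05    # assumed upload/download ratio while leeching;
                            # BitTorrent's tit-for-tat design rewards peers
                            # that upload, so a ratio of exactly 0 is unusual

uploaded_tb = DATASET_SIZE_TB * LEECH_SHARE_RATIO
print(f"Estimated upload while leeching: {uploaded_tb:.1f} TB")  # 4.0 TB
```

The dispute, in other words, reduces to whether that upload ratio was effectively zero during Meta’s downloads, which is what Meta asserts and the authors’ experts contest.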

It’s also possible that for Chhabria, the torrenting question seemed like an unnecessary distraction. Former Meta attorney Mark Lemley, who quit the case earlier this year, told Vanity Fair that the torrenting was “one of those things that sounds bad but actually shouldn’t matter at all in the law. Fair use is always about uses the plaintiff doesn’t approve of; that’s why there is a lawsuit.”

Lemley suggested that court cases mulling fair use at this moment should focus on the outputs rather than the training. Citing the ruling that deemed Google Books’ scanning of books to share excerpts fair use, Lemley argued that “all search engines crawl the full Internet, including plenty of pirated content,” so there’s seemingly no reason to stop AI crawling.

But the Copyright Alliance, a nonprofit, non-partisan group supporting the authors in the case, alleged in a court filing that Meta is aiming to do the opposite in its bid to get AI products viewed as transformative. “When describing the purpose of generative AI,” Meta allegedly strives to convince the court to “isolate the ‘training’ process and ignore the output of generative AI,” because that’s seemingly the only way that Meta can convince the court that AI outputs serve “a manifestly different purpose from Plaintiffs’ books,” the Copyright Alliance argued.

“Meta’s motion ignores what comes after the initial ‘training’—most notably the generation of output that serves the same purpose of the ingested works,” the Copyright Alliance argued. And the torrenting question should matter, the group argued, because unlike in Google Books, Meta’s AI models are apparently training on pirated works, not “legitimate copies of books.”

Chhabria will not be making a snap decision in the case. He plans to take his time, and the longer he deliberates, the more pressure builds on not just Meta but every AI company defending training as fair use. Understanding that the entire AI industry potentially has a stake in the ruling, Chhabria apparently sought to relieve some tension at the end of the hearing with a joke, Wired reported.

“I will issue a ruling later today,” Chhabria said. “Just kidding! I will take a lot longer to think about it.”

New study accuses LM Arena of gaming its popular AI benchmark

This study also calls out LM Arena for what appears to be much greater promotion of private models like Gemini, ChatGPT, and Claude. Developers collect data on model interactions from the Chatbot Arena API, but teams focusing on open models consistently get the short end of the stick.

The researchers point out that certain models appear in arena faceoffs much more often, with Google and OpenAI together accounting for over 34 percent of collected model data. Firms like xAI, Meta, and Amazon are also disproportionately represented in the arena, so those firms get more vibemarking data than the makers of open models.

More models, more evals

The study authors have a list of suggestions to make LM Arena more fair. Several of the paper’s recommendations are aimed at correcting the imbalance of privately tested commercial models, for example, by limiting the number of models a group can add and retract before releasing one. The study also suggests showing all model results, even if they aren’t final.

However, the site’s operators take issue with some of the paper’s methodology and conclusions. LM Arena points out that the pre-release testing features have not been kept secret, with a March 2024 blog post featuring a brief explanation of the system. They also contend that model creators don’t technically choose the version that is shown. Instead, the site simply doesn’t show non-public versions for simplicity’s sake. When a developer releases the final version, that’s what LM Arena adds to the leaderboard.

Proprietary models get disproportionate attention in the Chatbot Arena, the study says. Credit: Shivalika Singh et al.

One place the two sides may find alignment is on the question of unequal matchups. The study authors call for fair sampling, which would ensure open models appear in Chatbot Arena at a rate similar to the likes of Gemini and ChatGPT. LM Arena has suggested it will work to make the sampling algorithm more varied so users don’t always get the big commercial models. That would send more eval data to small players, giving them the chance to improve and challenge the big commercial models.
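
For illustration, the gap between the current skewed sampling and the proposed fair sampling comes down to how model pairs are drawn for each battle. A minimal sketch, with hypothetical model names and weights:

```python
import random

# Hypothetical model pool; the weights mimic the skew the study describes,
# in which a few proprietary models dominate arena matchups.
MODELS = ["gemini", "chatgpt", "claude", "llama", "qwen", "olmo"]
SKEWED_WEIGHTS = [0.25, 0.25, 0.2, 0.15, 0.1, 0.05]  # illustrative only

def sample_pair(weights=None):
    """Draw two distinct models for a head-to-head battle."""
    if weights is None:  # "fair" mode: every model is equally likely
        return tuple(random.sample(MODELS, 2))
    first = random.choices(MODELS, weights=weights)[0]
    rest = [m for m in MODELS if m != first]
    rest_weights = [w for m, w in zip(MODELS, weights) if m != first]
    second = random.choices(rest, weights=rest_weights)[0]
    return first, second

print(sample_pair(SKEWED_WEIGHTS))  # tilted toward the big commercial models
print(sample_pair())                # uniform: open models battle just as often
```

Under uniform sampling, every model collects battle data (and therefore user feedback) at the same rate, which is what the study’s authors are asking for.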

LM Arena recently announced it was forming a corporate entity to continue its work. With money on the table, the operators need to ensure Chatbot Arena continues figuring into the development of popular models. However, it’s unclear whether this is an objectively better way to evaluate chatbots versus academic tests. As people vote on vibes, there’s a real possibility we are pushing models to adopt sycophantic tendencies. This may have helped nudge ChatGPT into suck-up territory in recent weeks, a move that OpenAI has hastily reverted after widespread anger.

At monopoly trial, Zuckerberg redefined social media as texting with friends


“The magic of friends has fallen away”

Mark Zuckerberg played up TikTok rivalry at monopoly trial, but judge may not buy it.

The Meta monopoly trial has raised a question that Meta hopes the Federal Trade Commission (FTC) can’t effectively answer: How important is it to use social media to connect with friends and family today?

Connecting with friends was, of course, Facebook’s primary use case as it became the rare social network to hit 1 billion users—not by being acquired by a Big Tech company but based on the strength of its clean interface and the network effects that kept users locked in simply because all the important people in their life chose to be there.

According to the FTC, Meta took advantage of Facebook’s early popularity, and it has since bought out rivals and otherwise cornered the market on personal social networks. Only Snapchat and MeWe (a privacy-focused Facebook alternative) are competitors to Meta platforms, the FTC argues, and social networks like TikTok or YouTube aren’t interchangeable, because those aren’t destinations focused on connecting friends and family.

For Meta CEO Mark Zuckerberg, however, those early days of Facebook bringing old friends back together are apparently over. He took the stand this week to testify that the FTC’s market definition ignores the reality that Meta contends with today, where “the amount that people are sharing with friends on Facebook, especially, has been declining,” CNN reported.

“Even the amount of new friends that people add … I think has been declining,” Zuckerberg said, although he did not indicate how steep the decline is. “I don’t know the exact numbers,” Zuckerberg admitted. Meta’s former chief operating officer, Sheryl Sandberg, also took the stand and reportedly testified that while she was at Meta, “friends and family sharing went way down over time … If you have a strategy of targeting friends and family, you’d have serious revenue issues.”

In particular, TikTok’s explosive popularity has shifted the dynamics of social media today, Zuckerberg suggested. For many users, “apps now serve primarily as discovery engines,” Zuckerberg testified, and social interactions increasingly come from sharing fun creator content in private messages, rather than through engaging with a friend or family member’s posts.

That’s why Meta added Reels, Zuckerberg testified, and, more recently, TikTok Shop-like functionality. To stay relevant, Meta had to make its platforms more like TikTok, investing heavily in its discovery algorithm and even risking the ire of loyal Instagram users by turning their perfectly curated square grids into rectangles, Wired noted in a piece probing Meta’s efforts to lure TikTok users to Instagram.

There was seemingly no bridge too far, because Zuckerberg said, “TikTok is still bigger than either Facebook or Instagram, and I don’t like it when our competitors do better than us.” And since Meta has no interest in buying TikTok, due to fears of basing business in China, Big Tech on Trial reported, Meta’s only choice was to TikTok-ify its apps to avoid a mass exodus after Facebook’s user numbers declined for the first time in 2022. Committing to this future, Meta doubled the amount of force-fed filler in Instagram feeds the next year.

Right now, Meta is positioning TikTok as one of its biggest competitors, supposedly flagging the app as a “top priority” and “highly urgent” competitive threat as early as 2018, Zuckerberg said. Further, Zuckerberg testified that while TikTok’s popularity grew, Meta’s “growth slowed down dramatically,” TechCrunch reported. And perhaps most persuasively, when TikTok briefly went dark earlier this year, some TikTokers moved to Instagram, Meta argued, suggesting that some users consider the platforms interchangeable.

If Meta can convince the court that the FTC’s market definition is wrong and that TikTok is Meta’s biggest rival, then Meta’s market share drops below monopolist standards, “undercutting” the FTC’s case, Big Tech on Trial reported.

But are Facebook and Instagram substitutes for TikTok?

Although Meta paints the picture that TikTok users naturally gravitated to Instagram during the TikTok outage, it’s clear that Meta advertised heavily to move them in that direction. There was even a conspiracy theory that Meta had bought TikTok in the hours before TikTok went down, Wired reported, as users noticed Meta banners encouraging them to link their TikTok accounts to Meta platforms. However, even the reported Meta ad blitz seemingly didn’t sway that many TikTok users, as Sensor Tower data at the time apparently indicated that “Instagram and Facebook appeared to receive only a modest increase in daily active users and downloads” during the TikTok outage, Wired reported.

Perhaps a more interesting question that the court may entertain is not where TikTok users go when TikTok is down, but where Instagram or Facebook users turn if they no longer want to use those platforms. If the FTC can argue that people seeking a destination to connect with friends or family wouldn’t substitute TikTok for that purpose, its market definition might fly.

Kenneth Dintzer, a partner at Crowell & Moring and the former lead attorney in the DOJ’s winning Google search monopoly case, told Ars that the chief judge in the case, James Boasberg, made clear at summary judgment that acknowledging Meta’s rivalry with TikTok “doesn’t really answer the question about friends and family.”

So even though Zuckerberg was “pretty persuasive,” his testimony on TikTok may not move the judge much. However, there was one exchange at the trial where Boasberg asked, “How much does it matter if friends are on a particular platform, if friends can share outside of it?” Zuckerberg praised this as a “good question” and “explained that it doesn’t matter much because people can fluidly share across platforms, using each one for its value as a ‘discovery engine,'” Big Tech on Trial reported.

Dintzer noted that Zuckerberg seemed to attempt to float a different theory explaining why TikTok was a valid rival—curiously attempting to redefine “social media” to overcome the judge’s skepticism in considering TikTok a true Meta rival.

Zuckerberg’s theory, Dintzer said, suggests that “if I open up something on TikTok or on YouTube, and I send it to a friend, that is social media.”

But that broad definition could be problematic, since it would suggest that all texting and messaging are social media, Dintzer said.

“That didn’t seem particularly persuasive,” Dintzer said. Although that kind of social sharing is “certainly something that people enjoy,” it still “doesn’t seem to be quite the same thing as posting something on Facebook for your friends and family.”

Another wrinkle that may scramble Meta’s defense is that Meta has publicly declared that its priority is to bring back “OG Facebook” and refresh how friends and family connect on its platforms. Just today, Instagram chief Adam Mosseri announced a new Instagram feature called “blend” that strives to connect friends and family through sharing access to their unique discovery algorithms.

Those initiatives seem like a strategy that fully relies on Meta’s core use case of connecting friends and family (and network effects that Zuckerberg downplayed) to propel engagement that could spike revenue. However, that goal could invite scrutiny, perhaps signaling to the court that Meta still benefits from the alleged monopoly in personal social networking and will only continue locking in users seeking to connect with friends and family.

“The magic of friends has fallen away,” Meta’s blog said, a line that, despite the seeming contradiction, could serve as both a tagline for its new “Friends” tab on Facebook and the headline of its defense so far in the monopoly trial.

Zuckerberg’s 2012 email dubbed “smoking gun” at Meta monopoly trial


FTC’s “entire” monopoly case rests on decade-old emails, Meta argued.

Starting the Federal Trade Commission (FTC) antitrust trial Monday with a bang, Daniel Matheson, the FTC’s lead litigator, flagged a “smoking gun”—a 2012 email where Mark Zuckerberg suggested that Facebook could buy Instagram to “neutralize a potential competitor,” The New York Times reported.

And in “another banger of an email from Zuckerberg,” Brendan Benedict, an antitrust expert monitoring the trial for Big Tech on Trial, posted on X that the Meta CEO wrote, “Messenger isn’t beating WhatsApp. Instagram was growing so much faster than us that we had to buy them for $1 billion… that’s not exactly killing it.”

These messages and others, the FTC hopes to convince the court, provide evidence that Zuckerberg runs Meta by the mantra “it’s better to buy than compete”—seemingly for more than a decade intent on growing the Facebook empire by killing off rivals, allegedly in violation of antitrust law. Another message from Zuckerberg exhibited at trial, Benedict noted on X, suggests Facebook tried to buy yet another rival, Snapchat, for $6 billion.

“We should probably prepare for a leak that we offered $6b… and all the negative [attention] that will come from that,” the Zuckerberg message said.

At the trial, Matheson suggested that “Meta broke the deal” that firms have in the US to compete to succeed, allegedly deciding “that competition was too hard, and it would be easier to buy out their rivals than to compete with them,” the NYT reported. Ultimately, it will be up to the FTC to prove that Meta couldn’t have achieved its dominance today without buying Instagram and WhatsApp (in 2012 and 2014, respectively), while legal experts told the NYT that it is “extremely rare” to unwind mergers approved so many years ago.

Later today, Zuckerberg will take the stand and testify for perhaps seven hours, likely being made to answer for these messages and more. According to the NYT, the FTC will present a paper trail of emails where Zuckerberg and other Meta executives make it clear that acquisitions were intended to remove threats to Facebook’s dominance in the market.

It’s apparent that Meta plans to argue that it doesn’t matter what Zuckerberg or other executives intended when pursuing acquisitions. In a pretrial brief, Meta argued that “the FTC’s case rests almost entirely on emails (many more than a decade old) allegedly expressing competitive concerns” but suggested that this is only “intent” evidence, “without any evidence of anticompetitive effects.”

FTC may force Meta to spin off Instagram, WhatsApp

It is the FTC’s burden to show that Meta’s acquisitions harmed consumers and the market (and those harms outweigh any believable pro-competitive benefits alleged by Meta), but it remains to be seen whether Meta will devote ample time to testifying that “Mark Zuckerberg got it wrong” when describing his rationale for acquisitions, Big Tech on Trial noted.

Meta’s lead lawyer, Mark Hansen, told Law360 that “what people thought at Meta is not really what this case is.” (For those keeping track of who’s who in this case, Hansen apparently once was the boss of James Boasberg, the judge in the case, Big Tech on Trial reported.)

The social media company hopes to convince the court that the FTC’s case is political. So far, Meta has accused the FTC of shifting its market definition while willfully overlooking today’s competitive realities online, simply to punish a tech giant for its success.

In a blog post on Sunday, Meta’s chief legal officer, Jennifer Newstead, accused the FTC of lobbing a “weak case” that “ignores reality.” Meta insists that the FTC has “gerrymandered a fictitious market” to exclude Meta’s actual rivals, like TikTok, X, YouTube, or LinkedIn.

Boasberg will be scrutinizing the market definition, as well as alleged harms, and the FTC will potentially struggle to win him over on the merits of their case. Big Tech on Trial—which suggested that Meta’s acquisitions, if intended to kill off rivals, would be considered “a textbook violation of the antitrust laws”—noted that the court previously told the FTC that the agency had an “uphill climb” in proving its market definition. And because Meta’s social platforms are free, it’s harder to show direct evidence of consumer harms, experts have noted.

Still, for Meta, the stakes are high, as the FTC could pursue a breakup of the company, including requiring Meta to spin off WhatsApp and Instagram. Losing Instagram would hit Meta’s revenue hard, as Instagram is expected to bring in more than half of Meta’s US ad revenue in 2025, eMarketer forecast last December.

The trial is expected to last eight weeks, but much of the most-anticipated testimony will come early. Facebook’s former chief operating officer, Sheryl Sandberg, as well as Kevin Systrom, co-founder of Instagram, are expected to testify this week.

All unsealed emails and exhibits will eventually be posted on a website jointly managed by the FTC and Meta, but Ars was not yet provided a link or timeline for when the public evidence will be posted online.

Meta mocks FTC’s “ad load theory”

The FTC is arguing that Meta overpaid to acquire Instagram and WhatsApp to maintain an alleged monopoly in the personal social networking market that includes rivals like Snapchat and MeWe, a social networking platform that brands itself as a privacy-focused Facebook alternative.

In opening arguments, the FTC alleged that once competition was eliminated, Meta then degraded the quality of its platforms by limiting user privacy and inundating users with ads.

Meta has defended its acquisitions by arguing that it has improved Instagram and WhatsApp. At trial, Meta’s lawyer Hansen made light of the FTC’s “ad load theory,” stirring laughter in the reportedly packed courtroom, Benedict posted on X.

“If you don’t like an ad, you scroll past it. It takes about a second,” Hansen said.

Meanwhile, Newstead, who reportedly attended opening arguments, argued in her blog that “Instagram and WhatsApp provide a model for what successful acquisitions can achieve: Meta has made Instagram and WhatsApp better, more reliable and more secure through billions of dollars and millions of hours of investment.”

By breaking up these acquisitions, Hansen argued, the FTC would be sending a strong message to startups that “would kill entrepreneurship” by seemingly taking mergers and acquisitions “off the table,” Benedict posted on X.

To defeat the FTC, Meta will likely attempt to broaden the market definition to include more rivals. In support of that, Meta has already pointed to the recent TikTok ban driving TikTok users to Instagram, which allegedly shows the platforms are interchangeable, despite the FTC differentiating TikTok as a video app.

The FTC will likely lean on Meta’s internal documents to show who Meta actually considers rivals. During opening arguments, for example, the FTC reportedly shared a Meta document showing that Meta itself has agreed with the FTC and differentiated Facebook as connecting “friends and family,” while “LinkedIn connects coworkers” and “Nextdoor connects neighbors.”

“Contemporaneous records reveal that Meta and other social media executives understood that users flock to different platforms for different purposes and that Facebook, Instagram, and WhatsApp were specifically designed to operate in a distinct submarket for family and friend connections,” the American Economic Liberties Project, which is partnering with Big Tech on Trial to monitor the proceedings, said in a press statement.

But Newstead suggested that “evidence of fierce and increasing competition in the market has only grown in the four years since the FTC’s complaint was filed,” and Meta now “faces strong competition in a rapidly shifting tech landscape that includes American and foreign competitors.”

To emphasize the threats to US consumers and businesses, Newstead also invoked the supposed threat to America’s AI leadership if one of the country’s leading tech companies loses momentum at this key moment.

“It’s absurd that the FTC is trying to break up a great American company at the same time the Administration is trying to save Chinese-owned TikTok,” Newstead said. “And, it makes no sense for regulators to try and weaken US companies right at the moment we most need them to invest in winning the competition with China for leadership in AI.”

Trump’s FTC appears unlikely to back down

Zuckerberg has been criticized for his supposed last-ditch attempts to push the Trump administration to pause or toss the FTC’s case. Last month, the CEO visited Trump in the Oval Office to discuss a settlement, Politico reported, apparently worrying officials who don’t want Trump to bail out Meta.

On Monday, the FTC did not appear to be wavering, however, prompting alarm bells in the tech industry.

Patrick Hedger, the director of policy for NetChoice—a trade group that represents Meta and other Big Tech companies—warned that if the FTC undoes Meta’s acquisitions, it would harm innovation and competition while damaging trust in the FTC long-term.

“This bait-and-switch against Meta for acquisitions approved over 10 years ago in the fiercely competitive social media marketplace will have serious ripple effects not only for the US tech industry, but across all American businesses,” Hedger said.

Seemingly accusing Donald Trump’s FTC of pursuing Lina Khan’s alleged agenda against Big Tech, Hedger added that “with Meta at the forefront of open-source AI innovation and a global competitor, the outcome of this trial will have spillover into the entire economy. It will create a fear among businesses that making future, pro-competitive investments could be reversed due to political discontent—not the necessary evidence traditionally required for an anticompetitive claim.”

Big Tech on Trial noted that it’s possible that the FTC could “vote to settle, withdraw, or pause the case.” Last month, Trump fired the FTC’s two Democratic commissioners, eliminating a 3–2 split and ensuring that only Republicans are steering the agency for now.

But Trump’s FTC seems determined to proceed in attempts to disrupt Meta’s business. FTC Chair Andrew Ferguson told Fox Business Monday that “antitrust laws can help make sure that no private sector company gets so powerful that it affects our lives in ways that are really bad for all Americans,” and “that’s what this trial beginning today is all about.”

Meta’s surprise Llama 4 drop exposes the gap between AI ambition and reality

Meta constructed the Llama 4 models using a mixture-of-experts (MoE) architecture, which is one way around the limitations of running huge AI models. Think of MoE like having a large team of specialized workers; instead of everyone working on every task, only the relevant specialists activate for a specific job.

For example, Llama 4 Maverick features 400 billion total parameters, but only 17 billion of those parameters are active at once across one of 128 experts. Likewise, Scout features 109 billion total parameters, with only 17 billion active at once across one of 16 experts. This design can reduce the computation needed to run the model, since smaller portions of the neural network weights are active simultaneously.
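
To make the routing idea concrete, here is a toy sketch of a top-1 mixture-of-experts layer, with dimensions that are purely illustrative and far smaller than Llama 4’s. This is not Meta’s implementation; production MoE layers typically use learned softmax gating, route to multiple experts, and add a shared expert, but the saving comes from the same place: most expert weights sit idle for any given token.

```python
import numpy as np

rng = np.random.default_rng(0)

NUM_EXPERTS = 16   # Scout-like: one expert (of 16) active per token
D_MODEL = 64       # toy hidden size; real models are vastly larger

# Each expert is a small feed-forward block; a router picks one per token.
experts = [rng.standard_normal((D_MODEL, D_MODEL)) * 0.02
           for _ in range(NUM_EXPERTS)]
router = rng.standard_normal((D_MODEL, NUM_EXPERTS)) * 0.02

def moe_layer(x):
    """Route each token to its top-1 expert; only that expert's weights run."""
    logits = x @ router              # (tokens, experts) routing scores
    choice = logits.argmax(axis=-1)  # winning expert index per token
    out = np.empty_like(x)
    for e in range(NUM_EXPERTS):
        mask = choice == e
        if mask.any():               # only experts with assigned tokens do work
            out[mask] = x[mask] @ experts[e]
    return out

tokens = rng.standard_normal((8, D_MODEL))
print(moe_layer(tokens).shape)  # (8, 64): same shape, a fraction of the FLOPs
```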

Llama’s reality check arrives quickly

Current AI models have relatively limited short-term memory: a context window determines how much information a model can process at once. AI language models like Llama handle that memory as chunks of data called tokens, which can be whole words or fragments of longer words. Large context windows allow AI models to process longer documents, larger code bases, and longer conversations.

Despite Meta’s promotion of Llama 4 Scout’s 10 million token context window, developers have so far found that using even a fraction of that amount is challenging due to memory limitations. Independent AI researcher Simon Willison reported on his blog that third-party services providing access, like Groq and Fireworks, limited Scout’s context to just 128,000 tokens. Another provider, Together AI, offered 328,000 tokens.

Evidence suggests accessing larger contexts requires immense resources. Willison pointed to Meta’s own example notebook (“build_with_llama_4“), which states that running a 1.4 million token context needs eight high-end Nvidia H100 GPUs.
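
A rough, assumption-laden calculation shows why long contexts are so memory-hungry: the attention key/value cache grows linearly with context length. The model dimensions below are invented for illustration, not Scout’s published configuration:

```python
# Back-of-envelope KV-cache memory for a long context. All model dimensions
# here are illustrative assumptions, not Llama 4 Scout's actual specs.

context_tokens = 1_400_000  # the context size from Meta's example notebook
n_layers = 48               # assumed transformer depth
n_kv_heads = 8              # assumed key/value heads (grouped-query attention)
head_dim = 128              # assumed per-head dimension
bytes_per_value = 2         # 16-bit (fp16/bf16) cache entries

# Keys and values are each cached per layer, per KV head, per token.
kv_bytes = (context_tokens * n_layers * n_kv_heads * head_dim
            * 2 * bytes_per_value)
print(f"KV cache alone: {kv_bytes / 1e9:.0f} GB")  # ~275 GB on these numbers
```

On these made-up numbers, the cache alone approaches 275 GB before the model weights are even loaded (109 billion 16-bit parameters add roughly another 218 GB), which makes an eight-GPU requirement plausible.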

Willison documented his own testing troubles. When he asked Llama 4 Scout via the OpenRouter service to summarize a long online discussion (around 20,000 tokens), he got what he described as “complete junk output” that devolved into repetitive loops.

Meta plans to test and tinker with X’s community notes algorithm

Meta also confirmed that it won’t be reducing the visibility of misleading posts that receive community notes. That’s a change from the prior system, Meta noted, which had penalties associated with fact-checking.

According to Meta, X’s algorithm cannot be gamed, supposedly safeguarding “against organized campaigns” striving to manipulate notes and “influence what notes get published or what they say.” Meta claims it will rely on external research on community notes to avoid that pitfall, but as recently as last October, outside researchers had suggested that X’s Community Notes were easily sabotaged by toxic X users.

“We don’t expect this process to be perfect, but we’ll continue to improve as we learn,” Meta said.

Meta confirmed that it plans to tweak X’s algorithm over time to develop its own version of community notes and “may explore different or adjusted algorithms to support how Community Notes are ranked and rated.”
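
For a sense of what “ranked and rated” means here: X’s Community Notes ranker is open source, and its core idea is a bridging-based matrix factorization in which each rating is modeled as a global mean plus a note intercept plus the product of a user factor and a note factor. Support explained by factor alignment (one side of a divide loving a note) is absorbed by the factor term; only cross-side agreement lifts the note’s intercept, which is what gets it shown. The sketch below is a toy illustration of that idea, not X’s or Meta’s production code; the real system learns user and note parameters jointly with regularization, whereas this version fixes user factors at ±1 for clarity.

```python
import numpy as np

rng = np.random.default_rng(1)
user_factor = np.repeat([1.0, -1.0], 100)  # two camps of 100 users each

def simulate(kind):
    """Return (rated_mask, helpful_ratings) for one synthetic note."""
    if kind == "bridging":   # both camps rate it; ~80% find it helpful
        rated = rng.random(200) < 0.5
        helpful = rng.random(200) < 0.8
    else:                    # partisan: mostly its own camp rates it, and loves it
        rated = rng.random(200) < np.where(user_factor > 0, 0.8, 0.1)
        helpful = rng.random(200) < np.where(user_factor > 0, 0.95, 0.05)
    return rated, helpful.astype(float)

MU = 0.5  # assumed global mean rating
for kind in ("bridging", "partisan"):
    rated, r = simulate(kind)
    f, y = user_factor[rated], r[rated] - MU
    # Least squares: rating - MU ≈ intercept + user_factor * note_factor
    A = np.column_stack([np.ones(f.size), f])
    intercept, note_factor = np.linalg.lstsq(A, y, rcond=None)[0]
    print(f"{kind:9s} raw helpful rate {r[rated].mean():.2f}, "
          f"intercept {intercept:+.2f}, factor {note_factor:+.2f}")
```

The partisan note actually earns the higher raw approval rate, but its support is explained by the factor term, so its intercept (the score that matters) lands near zero while the bridging note’s stays high. That structure is what supposedly makes the ranking hard to game with one-sided vote brigades.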

In a post, X’s Support account said that X was “excited” that Meta was using its “well-established, academically studied program as a foundation” for its community notes.

AI firms follow DeepSeek’s lead, create cheaper models with “distillation”

Thanks to distillation, developers and businesses can access these models’ capabilities at a fraction of the price, allowing app developers to run AI models quickly on devices such as laptops and smartphones.

Developers can use OpenAI’s platform for distillation, learning from the large language models that underpin products like ChatGPT. OpenAI’s largest backer, Microsoft, used GPT-4 to distill its Phi family of small language models as part of a commercial partnership after investing nearly $14 billion into the company.
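
The underlying technique is the classic “soft targets” recipe from Hinton and colleagues: a small student model is trained to match a large teacher’s softened output distribution rather than hard labels. Here is a minimal PyTorch sketch of that generic recipe, with stand-in models; it is not OpenAI’s hosted distillation API.

```python
import torch
import torch.nn.functional as F

# Stand-in "teacher" (wide) and "student" (narrow) classifiers over 1,000 labels.
teacher = torch.nn.Sequential(torch.nn.Linear(128, 512), torch.nn.ReLU(),
                              torch.nn.Linear(512, 1000))
student = torch.nn.Sequential(torch.nn.Linear(128, 32), torch.nn.ReLU(),
                              torch.nn.Linear(32, 1000))
opt = torch.optim.Adam(student.parameters(), lr=1e-3)
T = 2.0  # temperature: softening exposes the teacher's "dark knowledge"

x = torch.randn(32, 128)  # one batch of synthetic inputs
with torch.no_grad():     # the teacher is frozen; it only provides targets
    teacher_probs = F.softmax(teacher(x) / T, dim=-1)

opt.zero_grad()
student_log_probs = F.log_softmax(student(x) / T, dim=-1)
# KL divergence pulls the student's distribution toward the teacher's;
# scaling by T^2 keeps gradient magnitudes comparable across temperatures.
loss = F.kl_div(student_log_probs, teacher_probs, reduction="batchmean") * T * T
loss.backward()
opt.step()
print(float(loss))
```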

However, the San Francisco-based start-up has said it believes DeepSeek distilled OpenAI’s models to train its competitor, a move that would be against its terms of service. DeepSeek has not commented on the claims.

While distillation can be used to create high-performing models, experts add that the resulting models are more limited.

“Distillation presents an interesting trade-off; if you make the models smaller, you inevitably reduce their capability,” said Ahmed Awadallah of Microsoft Research, who said a distilled model can be designed to be very good at summarising emails, for example, “but it really would not be good at anything else.”

David Cox, vice-president for AI models at IBM Research, said most businesses do not need a massive model to run their products, and distilled ones are powerful enough for purposes such as customer service chatbots or running on smaller devices like phones.

“Any time you can [make it less expensive] and it gives you the right performance you want, there is very little reason not to do it,” he added.

That presents a challenge to many of the business models of leading AI firms. Even when developers use distilled models from companies like OpenAI, those models cost far less to run, are less expensive to create, and, therefore, generate less revenue. Model-makers like OpenAI often charge less for the use of distilled models, as they require less computational load.

Meta claims torrenting pirated books isn’t illegal without proof of seeding

Just because Meta admitted to torrenting a dataset of pirated books for AI training purposes, that doesn’t necessarily mean that Meta seeded the file after downloading it, the social media company claimed in a court filing this week.

Evidence instead shows that Meta “took precautions not to ‘seed’ any downloaded files,” Meta’s filing said. Seeding refers to sharing a torrented file after the download completes, and because there’s allegedly no proof of such “seeding,” Meta insisted that authors cannot prove Meta shared the pirated books with anyone during the torrenting process.

Whether or not Meta actually seeded the pirated books could make a difference in a copyright lawsuit from book authors including Richard Kadrey, Sarah Silverman, and Ta-Nehisi Coates. Authors had previously alleged that Meta unlawfully copied and distributed their works through AI outputs—an increasingly common complaint that so far has barely been litigated. But Meta’s admission to torrenting appears to add a more straightforward claim of unlawful distribution of copyrighted works through illegal torrenting, which courts have long treated as settled law.

Authors have alleged that “Meta deliberately engaged in one of the largest data piracy campaigns in history to acquire text data for its LLM training datasets, torrenting and sharing dozens of terabytes of pirated data that altogether contain many millions of copyrighted works.” Separate from their copyright infringement claims opposing Meta’s AI training on pirated copies of their books, authors alleged that Meta torrenting the dataset was “independently illegal” under California’s Computer Data Access and Fraud Act (CDAFA), which allegedly “prevents the unauthorized taking of data, including copyrighted works.”

Meta, however, is hoping to convince the court that torrenting is not in and of itself illegal, but is, rather, a “widely-used protocol to download large files.” According to Meta, the decision to download the pirated books dataset from pirate libraries like LibGen and Z-Library was simply a move to access “data from a ‘well-known online repository’ that was publicly available via torrents.”

Arm to start making server CPUs in-house

Cambridge-headquartered Arm has more than doubled in value to $160 billion since it listed on Nasdaq in 2023, carried higher by explosive investor interest in AI. Arm’s partnerships with Nvidia and Amazon have driven its rapid growth in the data centers that power AI assistants from OpenAI, Meta, and Anthropic.

Meta is the latest big tech company to turn to Arm for server chips, displacing those traditionally provided by Intel and AMD.

During last month’s earnings call, Meta’s finance chief Susan Li said it would be “extending our custom silicon efforts to [AI] training workloads” to drive greater efficiency and performance by tuning its chips to its particular computing needs.

Meanwhile, an Arm-produced chip is also likely to eventually play a role in Sir Jony Ive’s secretive plans to build a new kind of AI-powered personal device, which is a collaboration between the iPhone designer’s firm LoveFrom, OpenAI’s Sam Altman, and SoftBank.

Arm’s designs have been used in more than 300 billion chips, including almost all of the world’s smartphones. Its power-efficient designs have made its CPUs, the general-purpose workhorse that sits at the heart of any computer, an increasingly attractive alternative to Intel’s chips in PCs and servers at a time when AI is making data centers much more energy-intensive.

Arm, which started out in a converted turkey barn in Cambridgeshire 35 years ago, became ubiquitous in the mobile market by licensing its designs to Apple for its iPhone chips, as well as Android suppliers such as Qualcomm and MediaTek. Maintaining its unique position in the center of the fiercely competitive mobile market has required a careful balancing act for Arm.

But SoftBank founder Masayoshi Son has long pushed for Arm to make more money from its intellectual property. Under Rene Haas, who became chief executive in 2022, Arm’s business model began to evolve, with a focus on driving higher royalties from customers as the company designs more of the building blocks needed to make a chip.

Going a step further by building and selling its own complete chip is a bold move by Haas that risks putting it on a collision course with customers such as Qualcomm, which is already locked in a legal battle with Arm over licensing terms, and Nvidia, the world’s most valuable chipmaker.

Arm, SoftBank, and Meta declined to comment.

Additional reporting by Hannah Murphy.

© 2025 The Financial Times Ltd. All rights reserved. Not to be redistributed, copied, or modified in any way.

“Torrenting from a corporate laptop doesn’t feel right”: Meta emails unsealed

Emails discussing torrenting prove that Meta knew it was “illegal,” authors alleged. And Meta research engineer Nikolay Bashlykov’s warnings seemingly landed on deaf ears, with authors alleging that evidence showed Meta chose instead to hide its torrenting as best it could while downloading and seeding terabytes of data from multiple shadow libraries as recently as April 2024.

Meta allegedly concealed seeding

Supposedly, Meta tried to conceal the seeding by not using Facebook servers while downloading the dataset to “avoid” the “risk” of anyone “tracing back the seeder/downloader” from Facebook servers, an internal message from Meta researcher Frank Zhang said, while describing the work as in “stealth mode.” Meta also allegedly modified settings “so that the smallest amount of seeding possible could occur,” a Meta executive in charge of project management, Michael Clark, said in a deposition.

Now that new information has come to light, authors claim that Meta staff involved in the decision to torrent LibGen must be deposed again, because allegedly the new facts “contradict prior deposition testimony.”

Mark Zuckerberg, for example, claimed to have no involvement in decisions to use LibGen to train AI models. But unredacted messages show the “decision to use LibGen occurred” after “a prior escalation to MZ,” authors alleged.

Meta did not immediately respond to Ars’ request for comment and has maintained throughout the litigation that AI training on LibGen was “fair use.”

However, Meta has previously addressed its torrenting in a motion to dismiss filed last month, telling the court that “plaintiffs do not plead a single instance in which any part of any book was, in fact, downloaded by a third party from Meta via torrent, much less that Plaintiffs’ books were somehow distributed by Meta.”

While Meta may be confident in its legal strategy despite the new torrenting wrinkle, the social media company has seemingly complicated its case by allowing authors to expand the distribution theory that’s key to winning a direct copyright infringement claim beyond just claiming that Meta’s AI outputs unlawfully distributed their works.

As limited discovery on Meta’s seeding now proceeds, Meta is not fighting the seeding aspect of the direct copyright infringement claim at this time, telling the court that it plans to “set… the record straight and debunk… this meritless allegation on summary judgment.”

AI haters build tarpits to trap and trick AI scrapers that ignore robots.txt


Making AI crawlers squirm

Attackers explain how an anti-spam defense became an AI weapon.

Last summer, Anthropic inspired backlash when its ClaudeBot AI crawler was accused of hammering websites a million or more times a day.

And it wasn’t the only artificial intelligence company making headlines for supposedly ignoring instructions in robots.txt files to avoid scraping web content on certain sites. Around the same time, Reddit’s CEO called out all AI companies whose crawlers he said were “a pain in the ass to block,” despite the tech industry otherwise agreeing to respect “no scraping” robots.txt rules.
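
For context, robots.txt is just a plain-text file of voluntary requests served from a site’s root. A hypothetical file asking published AI crawlers to stay away might look like the following (the crawler names are real, published user-agent tokens; the trap path in the final rule is an invented example):

```
# robots.txt: compliance is voluntary, which is the whole problem
User-agent: GPTBot
Disallow: /

User-agent: ClaudeBot
Disallow: /

User-agent: CCBot
Disallow: /

# Well-behaved crawlers are steered away from the (hypothetical) tarpit;
# only bots that ignore these rules will wander into it.
User-agent: *
Disallow: /trap/
```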

Watching the controversy unfold was a software developer whom Ars has granted anonymity to discuss his development of malware (we’ll call him Aaron). Shortly after he noticed Facebook’s crawler exceeding 30 million hits on his site, Aaron began plotting a new kind of attack on crawlers “clobbering” websites that he told Ars he hoped would give “teeth” to robots.txt.

Building on an anti-spam cybersecurity tactic known as tarpitting, he created Nepenthes, malicious software named after a carnivorous plant that will “eat just about anything that finds its way inside.”

Aaron clearly warns users that Nepenthes is aggressive malware. It’s not to be deployed by site owners uncomfortable with trapping AI crawlers and sending them down an “infinite maze” of static files with no exit links, where they “get stuck” and “thrash around” for months, he tells users. Once trapped, the crawlers can be fed gibberish data, aka Markov babble, which is designed to poison AI models. That’s likely an appealing bonus feature for any site owners who, like Aaron, are fed up with paying for AI scraping and just want to watch AI burn.
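
The mechanics are simple enough to sketch in a few lines of Python. The toy below is not Nepenthes’ actual code, just the general shape of the trap: every URL under a trap prefix deterministically renders a page of gibberish plus links to more trap URLs, so a crawler that ignores robots.txt wanders an endless, ever-branching maze. Real tarpits also throttle responses to waste crawler time and use proper Markov chains for more convincing babble.

```python
import random
from flask import Flask  # assumes Flask is installed (pip install flask)

app = Flask(__name__)
WORDS = "the of crawler data model maze link text page train loss".split()

def babble(seed, n=80):
    """Cheap stand-in for the Markov-chain gibberish a real tarpit serves."""
    rng = random.Random(seed)
    return " ".join(rng.choice(WORDS) for _ in range(n))

@app.route("/trap/<path:slug>")
def trap(slug):
    rng = random.Random(slug)  # seed on the URL so each page re-renders the same
    links = "".join(
        f'<a href="/trap/{rng.getrandbits(32):08x}">more</a> '
        for _ in range(10)     # ten fresh exits, none of which ever leave the maze
    )
    return f"<html><body><p>{babble(slug)}</p>{links}</body></html>"

if __name__ == "__main__":
    app.run(port=8080)
```

Because each page links only to ten more trap pages, the crawl frontier grows without bound while never touching real content, and every fetched page feeds the scraper machine-generated junk.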

Tarpits were originally designed to waste spammers’ time and resources, but creators like Aaron have now evolved the tactic into an anti-AI weapon. As of this writing, Aaron confirmed that Nepenthes can effectively trap all the major web crawlers. So far, only OpenAI’s crawler has managed to escape.

It’s unclear how much damage tarpits or other AI attacks can ultimately do. Last May, Laxmi Korada, Microsoft’s director of partner technology, published a report detailing how leading AI companies were coping with poisoning, one of the earliest AI defense tactics deployed. He noted that all companies have developed poisoning countermeasures, while OpenAI “has been quite vigilant” and excels at detecting the “first signs of data poisoning attempts.”

Despite these efforts, he concluded that data poisoning was “a serious threat to machine learning models.” And in 2025, tarpitting represents a new threat, potentially increasing the costs of fresh data at a moment when AI companies are heavily investing and competing to innovate quickly while rarely turning significant profits.

“A link to a Nepenthes location from your site will flood out valid URLs within your site’s domain name, making it unlikely the crawler will access real content,” a Nepenthes explainer reads.

The only AI company that responded to Ars’ request to comment was OpenAI, whose spokesperson confirmed that OpenAI is already working on a way to fight tarpitting.

“We’re aware of efforts to disrupt AI web crawlers,” OpenAI’s spokesperson said. “We design our systems to be resilient while respecting robots.txt and standard web practices.”

But to Aaron, the fight is not about winning. Instead, it’s about resisting the AI industry further decaying the Internet with tech that no one asked for, like chatbots that replace customer service agents or the rise of inaccurate AI search summaries. By releasing Nepenthes, he hopes to do as much damage as possible, perhaps spiking companies’ AI training costs, dragging out training efforts, or even accelerating model collapse, with tarpits helping to delay the next wave of enshittification.

“Ultimately, it’s like the Internet that I grew up on and loved is long gone,” Aaron told Ars. “I’m just fed up, and you know what? Let’s fight back, even if it’s not successful. Be indigestible. Grow spikes.”

Nepenthes instantly inspires another tarpit

Nepenthes was released in mid-January but instantly became more popular than Aaron expected after tech journalist Cory Doctorow boosted a post from tech commentator Jürgen Geuter praising the novel AI attack method on Mastodon. Very quickly, Aaron was shocked to see engagement with Nepenthes skyrocket.

“That’s when I realized, ‘oh this is going to be something,'” Aaron told Ars. “I’m kind of shocked by how much it’s blown up.”

It’s hard to tell how widely Nepenthes has been deployed. Site owners are discouraged from flagging when the malware has been deployed, forcing crawlers to face unknown “consequences” if they ignore robots.txt instructions.

Aaron told Ars that while “a handful” of site owners have reached out and “most people are being quiet about it,” his web server logs indicate that people are already deploying the tool. Likely, site owners want to protect their content, deter scraping, or mess with AI companies.

When software developer and hacker Gergely Nagy, who goes by the handle “algernon” online, saw Nepenthes, he was delighted. At that time, Nagy told Ars that nearly all of his server’s bandwidth was being “eaten” by AI crawlers.

Already blocking scraping and attempting to poison AI models through a simpler method, Nagy took his defense method further and created his own tarpit, Iocaine. He told Ars the tarpit immediately killed off about 94 percent of bot traffic to his site, which was primarily from AI crawlers. Soon, social media discussion drove users to inquire about Iocaine deployment, including not just individuals but also organizations wanting to take stronger steps to block scraping.

Iocaine takes ideas (not code) from Nepenthes, but it’s more intent on using the tarpit to poison AI models. Nagy used a reverse proxy to trap crawlers in an “infinite maze of garbage” in an attempt to slowly poison their data collection as much as possible for daring to ignore robots.txt.

Taking its name from “one of the deadliest poisons known to man” from The Princess Bride, Iocaine is jokingly depicted as the “deadliest poison known to AI.” While there’s no way of validating that claim, Nagy’s motto is that the more poisoning attacks that are out there, “the merrier.” He told Ars that his primary reasons for building Iocaine were to help rights holders wall off valuable content and stop AI crawlers from crawling with abandon.

Tarpits aren’t perfect weapons against AI

Running malware like Nepenthes can burden servers, too. Aaron likened the cost of running Nepenthes to running a cheap virtual machine on a Raspberry Pi, and Nagy said that serving crawlers Iocaine costs about the same as serving his website.

But Aaron told Ars that Nepenthes wasting resources is the chief objection he’s seen preventing its deployment. Critics fear that deploying Nepenthes widely will not only burden their servers but also increase the costs of powering all that AI crawling for nothing.

“That seems to be what they’re worried about more than anything,” Aaron told Ars. “The amount of power that AI models require is already astronomical, and I’m making it worse. And my view of that is, OK, so if I do nothing, AI models, they boil the planet. If I switch this on, they boil the planet. How is that my fault?”

Aaron also defends against this criticism by suggesting that a broader impact could slow down AI investment enough to possibly curb some of that energy consumption. Perhaps due to the resistance, AI companies will be pushed to seek permission first to scrape or agree to pay more content creators for training on their data.

“Any time one of these crawlers pulls from my tarpit, it’s resources they’ve consumed and will have to pay hard cash for, but, being bullshit, the money [they] have spent to get it won’t be paid back by revenue,” Aaron posted, explaining his tactic online. “It effectively raises their costs. And seeing how none of them have turned a profit yet, that’s a big problem for them. The investor money will not continue forever without the investors getting paid.”

Nagy agrees that the more anti-AI attacks there are, the greater the potential is for them to have an impact. And by releasing Iocaine, Nagy showed that social media chatter about new attacks can inspire new tools within a few days. Marcus Butler, an independent software developer, similarly built his poisoning attack called Quixotic over a few days, he told Ars. Soon afterward, he received messages from others who built their own versions of his tool.

Butler is not in the camp of wanting to destroy AI. He told Ars that he doesn’t think “tools like Quixotic (or Nepenthes) will ‘burn AI to the ground.'” Instead, he takes a more measured stance, suggesting that “these tools provide a little protection (a very little protection) against scrapers taking content and, say, reposting it or using it for training purposes.”

But for a certain sect of Internet users, every little bit of protection seemingly helps. Geuter linked Ars to a list of tools bent on sabotaging AI. Ultimately, he expects that tools like Nepenthes are “probably not gonna be useful in the long run” because AI companies can likely detect and drop gibberish from training data. But Nepenthes represents a sea change, Geuter told Ars, providing a useful tool for people who “feel helpless” in the face of endless scraping and showing that “the story of there being no alternative or choice is false.”

Criticism of tarpits as AI weapons

Critics debating Nepenthes’ utility on Hacker News suggested that most AI crawlers could easily avoid tarpits like Nepenthes, with one commenter describing the attack as being “very crawler 101.” Aaron said that was his “favorite comment” because if tarpits are considered elementary attacks, he has “2 million lines of access log that show that Google didn’t graduate.”

But efforts to poison AI or waste AI resources don’t just mess with the tech industry. Governments globally are seeking to leverage AI to solve societal problems, and attacks on AI’s resilience seemingly threaten to disrupt that progress.

Nathan VanHoudnos is a senior AI security research scientist in the federally funded CERT Division of the Carnegie Mellon University Software Engineering Institute, which partners with academia, industry, law enforcement, and government to “improve the security and resilience of computer systems and networks.” He told Ars that new threats like tarpits seem to replicate a problem that AI companies are already well aware of: “that some of the stuff that you’re going to download from the Internet might not be good for you.”

“It sounds like these tarpit creators just mainly want to cause a little bit of trouble,” VanHoudnos said. “They want to make it a little harder for these folks to get” the “better or different” data “that they’re looking for.”

VanHoudnos co-authored a paper on “Counter AI” last August, pointing out that attackers like Aaron and Nagy are limited in how much they can mess with AI models. They may have “influence over what training data is collected but may not be able to control how the data are labeled, have access to the trained model, or have access to the AI system,” the paper said.

Further, AI companies are increasingly turning to the deep web for unique data, so any efforts to wall off valuable content with tarpits may be coming right when crawling on the surface web starts to slow, VanHoudnos suggested.

But according to VanHoudnos, AI crawlers are also “relatively cheap,” and companies may deprioritize fighting against new attacks on crawlers if “there are higher-priority assets” under attack. And tarpitting “does need to be taken seriously because it is a tool in a toolkit throughout the whole life cycle of these systems. There is no silver bullet, but this is an interesting tool in a toolkit,” he said.

Offering a choice to abstain from AI training

Aaron told Ars that he never intended Nepenthes to be a major project but that he occasionally puts in work to fix bugs or add new features. He said he’d consider working on integrations for real-time reactions to crawlers if there was enough demand.

Currently, Aaron predicts that Nepenthes might be most attractive to rights holders who want AI companies to pay to scrape their data. And many people seem enthusiastic about using it to reinforce robots.txt. But “some of the most exciting people are in the ‘let it burn’ category,” Aaron said. These people are drawn to tools like Nepenthes as an act of rebellion against AI making the Internet less useful and enjoyable for users.

Geuter told Ars that he considers Nepenthes “more of a sociopolitical statement than really a technological solution (because the problem it’s trying to address isn’t purely technical, it’s social, political, legal, and needs way bigger levers).”

To Geuter, a computer scientist who has been writing about the social, political, and structural impact of tech for two decades, AI is the “most aggressive” example of “technologies that are not done ‘for us’ but ‘to us.'”

“It feels a bit like the social contract that society and the tech sector/engineering have had (you build useful things, and we’re OK with you being well-off) has been canceled from one side,” Geuter said. “And that side now wants to have its toy eat the world. People feel threatened and want the threats to stop.”

As AI evolves, so do attacks, with one 2021 study showing that increasingly stronger data poisoning attacks, for example, were able to break data sanitization defenses. Whether these attacks can ever do meaningful destruction or not, Geuter sees tarpits as a “powerful symbol” of the resistance that Aaron and Nagy readily joined.

“It’s a great sign to see that people are challenging the notion that we all have to do AI now,” Geuter said. “Because we don’t. It’s a choice. A choice that mostly benefits monopolists.”

Tarpit creators like Nagy will likely be watching to see if poisoning attacks continue growing in sophistication. On the Iocaine site—which, yes, is protected from scraping by Iocaine—he posted this call to action: “Let’s make AI poisoning the norm. If we all do it, they won’t have anything to crawl.”

Reddit won’t interfere with users revolting against X with subreddit bans

A Reddit spokesperson told Ars that decisions to ban or not ban X links are user-driven. Subreddit members are allowed to suggest and institute subreddit rules, they added.

“Notably, many Reddit communities also prohibit Reddit links,” the Reddit representative pointed out. They noted that Reddit as a company doesn’t currently have any ban on links to X.

A ban against links to an entire platform isn’t outside of the ordinary for Reddit. Numerous subreddits ban social media links, Reddit’s spokesperson said. r/EarthPorn, a subreddit for landscape photography, for example, doesn’t allow website links because all posts “must be static images,” per the subreddit’s official rules. r/AskReddit, meanwhile, only allows for questions asked in the title of a Reddit post and doesn’t allow for use of the text box, including for sharing links.

“Reddit has a longstanding commitment to freedom of speech and freedom of association,” Reddit’s spokesperson said. They added that any person is free to make or moderate their own community. Those unsatisfied with a forum about Seahawks football that doesn’t have X links could feel free to make their own subreddit. That said, some of the subreddits considering X bans, like r/MadeMeSmile, already have millions of followers.

Meta bans also under discussion

As 404 Media noted, some Redditors are also pushing to block content from Facebook, Instagram, and other Meta properties in response to new Donald Trump-friendly policies instituted by owner Mark Zuckerberg, like Meta killing diversity programs and axing third-party fact-checkers.
