Instagram


At monopoly trial, Zuckerberg redefined social media as texting with friends


“The magic of friends has fallen away”

Mark Zuckerberg played up TikTok rivalry at monopoly trial, but judge may not buy it.

The Meta monopoly trial has raised a question that Meta hopes the Federal Trade Commission (FTC) can’t effectively answer: How important is it to use social media to connect with friends and family today?

Connecting with friends was, of course, Facebook’s primary use case as it became the rare social network to hit 1 billion users—not by being acquired by a Big Tech company but based on the strength of its clean interface and the network effects that kept users locked in simply because all the important people in their life chose to be there.

According to the FTC, Meta took advantage of Facebook’s early popularity, and it has since bought out rivals and otherwise cornered the market on personal social networks. Only Snapchat and MeWe (a privacy-focused Facebook alternative) are competitors to Meta platforms, the FTC argues, and social networks like TikTok or YouTube aren’t interchangeable, because those aren’t destinations focused on connecting friends and family.

For Meta CEO Mark Zuckerberg, however, those early days of Facebook bringing old friends back together are apparently over. He took the stand this week to testify that the FTC’s market definition ignores the reality that Meta contends with today, where “the amount that people are sharing with friends on Facebook, especially, has been declining,” CNN reported.

“Even the amount of new friends that people add … I think has been declining,” Zuckerberg said, although he did not indicate how steep the decline is. “I don’t know the exact numbers,” Zuckerberg admitted. Meta’s former chief operating officer, Sheryl Sandberg, also took the stand and reportedly testified that while she was at Meta, “friends and family sharing went way down over time … If you have a strategy of targeting friends and family, you’d have serious revenue issues.”

In particular, TikTok’s explosive popularity has shifted the dynamics of social media today, Zuckerberg suggested. For many users, “apps now serve primarily as discovery engines,” Zuckerberg testified, and social interactions increasingly come from sharing fun creator content in private messages, rather than through engaging with a friend or family member’s posts.

That’s why Meta added Reels, Zuckerberg testified, and, more recently, TikTok Shop-like functionality. To stay relevant, Meta had to make its platforms more like TikTok, investing heavily in its discovery algorithm and even risking the ire of loyal Instagram users by turning their perfectly curated square grids into rectangles, Wired noted in a piece probing Meta’s efforts to lure TikTok users to Instagram.

Seemingly no step was too drastic, because, as Zuckerberg said, “TikTok is still bigger than either Facebook or Instagram, and I don’t like it when our competitors do better than us.” And since Meta has no interest in buying TikTok, due to concerns about basing its business in China, Big Tech on Trial reported, Meta’s only choice was to TikTok-ify its apps to avoid a mass exodus after Facebook’s user numbers declined for the first time in 2022. Committing to this future, Meta doubled the amount of force-fed filler in Instagram feeds the next year.

Right now, Meta is positioning TikTok as one of its biggest competitors, with the company supposedly flagging TikTok as a “top priority” and “highly urgent” competitive threat as early as 2018, Zuckerberg said. Further, Zuckerberg testified that while TikTok’s popularity grew, Meta’s “growth slowed down dramatically,” TechCrunch reported. And perhaps most persuasively, when TikTok briefly went dark earlier this year, some TikTokers moved to Instagram, Meta argued, suggesting that some users consider the platforms interchangeable.

If Meta can convince the court that the FTC’s market definition is wrong and that TikTok is Meta’s biggest rival, then Meta’s market share drops below monopolist standards, “undercutting” the FTC’s case, Big Tech on Trial reported.

But are Facebook and Instagram substitutes for TikTok?

Although Meta paints the picture that TikTok users naturally gravitated to Instagram during the TikTok outage, it’s clear that Meta advertised heavily to move them in that direction. There was even a conspiracy theory that Meta had bought TikTok in the hours before TikTok went down, Wired reported, as users noticed Meta banners encouraging them to link their TikTok accounts to Meta platforms. However, even the reported Meta ad blitz seemingly didn’t sway that many TikTok users, as Sensor Tower data at the time apparently indicated that “Instagram and Facebook appeared to receive only a modest increase in daily active users and downloads” during the TikTok outage, Wired reported.

Perhaps a more interesting question that the court may entertain is not where TikTok users go when TikTok is down, but where Instagram or Facebook users turn if they no longer want to use those platforms. If the FTC can argue that people seeking a destination to connect with friends or family wouldn’t substitute TikTok for that purpose, its market definition might fly.

Kenneth Dintzer, a partner at Crowell & Moring and the former lead attorney in the DOJ’s winning Google search monopoly case, told Ars that the chief judge in the case, James Boasberg, made clear at summary judgment that acknowledging Meta’s rivalry with TikTok “doesn’t really answer the question about friends and family.”

So even though Zuckerberg was “pretty persuasive,” his testimony on TikTok may not move the judge much. However, there was one exchange at the trial where Boasberg asked, “How much does it matter if friends are on a particular platform, if friends can share outside of it?” Zuckerberg praised this as a “good question” and “explained that it doesn’t matter much because people can fluidly share across platforms, using each one for its value as a ‘discovery engine,'” Big Tech on Trial reported.

Dintzer noted that Zuckerberg seemed to float a different theory explaining why TikTok is a valid rival—curiously attempting to redefine “social media” to overcome the judge’s skepticism about treating TikTok as a true Meta competitor.

Zuckerberg’s theory, Dintzer said, suggests that “if I open up something on TikTok or on YouTube, and I send it to a friend, that is social media.”

But that broad definition could be problematic, since it would suggest that all texting and messaging are social media, Dintzer said.

“That didn’t seem particularly persuasive,” Dintzer said. Although that kind of social sharing is “certainly something that people enjoy,” it still “doesn’t seem to be quite the same thing as posting something on Facebook for your friends and family.”

Another wrinkle that may scramble Meta’s defense is that Meta has publicly declared that its priority is to bring back “OG Facebook” and refresh how friends and family connect on its platforms. Just today, Instagram chief Adam Mosseri announced a new Instagram feature called “blend” that aims to connect friends and family by letting them share access to each other’s unique discovery algorithms.

Those initiatives seem like a strategy that fully relies on Meta’s core use case of connecting friends and family (and network effects that Zuckerberg downplayed) to propel engagement that could spike revenue. However, that goal could invite scrutiny, perhaps signaling to the court that Meta still benefits from the alleged monopoly in personal social networking and will only continue locking in users seeking to connect with friends and family.

“The magic of friends has fallen away,” Meta’s blog said, a phrase that, despite the apparent tension, could serve as both a tagline for its new “Friends” tab on Facebook and the headline of Meta’s defense so far in the monopoly trial.


Ashley is a senior policy reporter for Ars Technica, dedicated to tracking social impacts of emerging policies and new technologies. She is a Chicago-based journalist with 20 years of experience.



Zuckerberg’s 2012 email dubbed “smoking gun” at Meta monopoly trial


FTC’s “entire” monopoly case rests on decade-old emails, Meta argued.

Starting the Federal Trade Commission (FTC) antitrust trial Monday with a bang, Daniel Matheson, the FTC’s lead litigator, flagged a “smoking gun”—a 2012 email where Mark Zuckerberg suggested that Facebook could buy Instagram to “neutralize a potential competitor,” The New York Times reported.

And in “another banger of an email from Zuckerberg,” Brendan Benedict, an antitrust expert monitoring the trial for Big Tech on Trial, posted on X that the Meta CEO wrote, “Messenger isn’t beating WhatsApp. Instagram was growing so much faster than us that we had to buy them for $1 billion… that’s not exactly killing it.”

These messages and others, the FTC hopes to convince the court, provide evidence that Zuckerberg runs Meta by the mantra “it’s better to buy than compete”—seemingly for more than a decade intent on growing the Facebook empire by killing off rivals, allegedly in violation of antitrust law. Another message from Zuckerberg exhibited at trial, Benedict noted on X, suggests Facebook tried to buy yet another rival, Snapchat, for $6 billion.

“We should probably prepare for a leak that we offered $6b… and all the negative [attention] that will come from that,” the Zuckerberg message said.

At the trial, Matheson suggested that “Meta broke the deal” that firms have in the US to compete to succeed, allegedly deciding “that competition was too hard, and it would be easier to buy out their rivals than to compete with them,” the NYT reported. Ultimately, it will be up to the FTC to prove that Meta couldn’t have achieved its dominance today without buying Instagram and WhatsApp (in 2012 and 2014, respectively), while legal experts told the NYT that it is “extremely rare” to unwind mergers approved so many years ago.

Later today, Zuckerberg will take the stand to testify for perhaps seven hours, and he will likely be asked to answer for these messages and more. According to the NYT, the FTC will present a paper trail of emails where Zuckerberg and other Meta executives make it clear that acquisitions were intended to remove threats to Facebook’s dominance in the market.

It’s apparent that Meta plans to argue that it doesn’t matter what Zuckerberg or other executives intended when pursuing acquisitions. In a pretrial brief, Meta argued that “the FTC’s case rests almost entirely on emails (many more than a decade old) allegedly expressing competitive concerns” but suggested that this is only “intent” evidence, “without any evidence of anticompetitive effects.”

FTC may force Meta to spin off Instagram, WhatsApp

It is the FTC’s burden to show that Meta’s acquisitions harmed consumers and the market (and those harms outweigh any believable pro-competitive benefits alleged by Meta), but it remains to be seen whether Meta will devote ample time to testifying that “Mark Zuckerberg got it wrong” when describing his rationale for acquisitions, Big Tech on Trial noted.

Meta’s lead lawyer, Mark Hansen, told Law360 that “what people thought at Meta is not really what this case is.” (For those keeping track of who’s who in this case, Hansen apparently once was the boss of James Boasberg, the judge in the case, Big Tech on Trial reported.)

The social media company hopes to convince the court that the FTC’s case is political. So far, Meta has accused the FTC of shifting its market definition while willfully overlooking today’s competitive realities online, simply to punish a tech giant for its success.

In a blog post on Sunday, Meta’s chief legal officer, Jennifer Newstead, accused the FTC of lobbing a “weak case” that “ignores reality.” Meta insists that the FTC has “gerrymandered a fictitious market” to exclude Meta’s actual rivals, like TikTok, X, YouTube, or LinkedIn.

Boasberg will be scrutinizing the market definition, as well as the alleged harms, and the FTC may struggle to win him over on the merits of its case. Big Tech on Trial—which suggested that Meta’s acquisitions, if intended to kill off rivals, would be considered “a textbook violation of the antitrust laws”—noted that the court previously told the FTC that the agency had an “uphill climb” in proving its market definition. And because Meta’s social platforms are free, it’s harder to show direct evidence of consumer harms, experts have noted.

Still, for Meta, the stakes are high, as the FTC could pursue a breakup of the company, including requiring Meta to spin off WhatsApp and Instagram. Losing Instagram would hit Meta’s revenue hard, as Instagram is expected to bring in more than half of Meta’s US ad revenue in 2025, eMarketer forecast last December.

The trial is expected to last eight weeks, but much of the most-anticipated testimony will come early. Facebook’s former chief operating officer, Sheryl Sandberg, as well as Kevin Systrom, co-founder of Instagram, are expected to testify this week.

All unsealed emails and exhibits will eventually be posted on a website jointly managed by the FTC and Meta, but Ars was not yet provided a link or timeline for when the public evidence will be posted online.

Meta mocks FTC’s “ad load theory”

The FTC is arguing that Meta overpaid to acquire Instagram and WhatsApp to maintain an alleged monopoly in the personal social networking market that includes rivals like Snapchat and MeWe, a social networking platform that brands itself as a privacy-focused Facebook alternative.

In opening arguments, the FTC alleged that once competition was eliminated, Meta then degraded the quality of its platforms by limiting user privacy and inundating users with ads.

Meta has defended its acquisitions by arguing that it has improved Instagram and WhatsApp. At trial, Meta’s lawyer Hansen made light of the FTC’s “ad load theory,” stirring laughter in the reportedly packed courtroom, Benedict posted on X.

“If you don’t like an ad, you scroll past it. It takes about a second,” Hansen said.

Meanwhile, Newstead, who reportedly attended opening arguments, argued in her blog that “Instagram and WhatsApp provide a model for what successful acquisitions can achieve: Meta has made Instagram and WhatsApp better, more reliable and more secure through billions of dollars and millions of hours of investment.”

By breaking up these acquisitions, Hansen argued, the FTC would be sending a strong message to startups that “would kill entrepreneurship” by seemingly taking mergers and acquisitions “off the table,” Benedict posted on X.

To defeat the FTC, Meta will likely attempt to broaden the market definition to include more rivals. In support of that, Meta has already pointed to the recent TikTok ban driving TikTok users to Instagram, which allegedly shows the platforms are interchangeable, despite the FTC differentiating TikTok as a video app.

The FTC will likely lean on Meta’s internal documents to show who Meta actually considers rivals. During opening arguments, for example, the FTC reportedly shared a Meta document showing that Meta itself has agreed with the FTC and differentiated Facebook as connecting “friends and family,” while “LinkedIn connects coworkers” and “Nextdoor connects neighbors.”

“Contemporaneous records reveal that Meta and other social media executives understood that users flock to different platforms for different purposes and that Facebook, Instagram, and WhatsApp were specifically designed to operate in a distinct submarket for family and friend connections,” the American Economic Liberties Project, which is partnering with Big Tech on Trial to monitor the proceedings, said in a press statement.

But Newstead suggested that “evidence of fierce and increasing competition in the market has only grown in the four years since the FTC’s complaint was filed,” and Meta now “faces strong competition in a rapidly shifting tech landscape that includes American and foreign competitors.”

To emphasize the threats to US consumers and businesses, Newstead also invoked the supposed threat to America’s AI leadership if one of the country’s leading tech companies loses momentum at this key moment.

“It’s absurd that the FTC is trying to break up a great American company at the same time the Administration is trying to save Chinese-owned TikTok,” Newstead said. “And, it makes no sense for regulators to try and weaken US companies right at the moment we most need them to invest in winning the competition with China for leadership in AI.”

Trump’s FTC appears unlikely to back down

Zuckerberg has been criticized for his supposed last-ditch attempts to push the Trump administration to pause or toss the FTC’s case. Last month, the CEO visited Trump in the Oval Office to discuss a settlement, Politico reported, apparently worrying officials who don’t want Trump to bail out Meta.

On Monday, however, the FTC did not appear to be wavering, setting off alarm bells in the tech industry.

Patrick Hedger, the director of policy for NetChoice—a trade group that represents Meta and other Big Tech companies—warned that if the FTC undoes Meta’s acquisitions, it would harm innovation and competition while damaging trust in the FTC long-term.

“This bait-and-switch against Meta for acquisitions approved over 10 years ago in the fiercely competitive social media marketplace will have serious ripple effects not only for the US tech industry, but across all American businesses,” Hedger said.

Seemingly accusing Donald Trump’s FTC of pursuing Lina Khan’s alleged agenda against Big Tech, Hedger added that “with Meta at the forefront of open-source AI innovation and a global competitor, the outcome of this trial will have spillover into the entire economy. It will create a fear among businesses that making future, pro-competitive investments could be reversed due to political discontent—not the necessary evidence traditionally required for an anticompetitive claim.”

Big Tech on Trial noted that it’s possible that the FTC could “vote to settle, withdraw, or pause the case.” Last month, Trump fired the FTC’s two Democratic commissioners, eliminating a 3–2 split and ensuring that only Republicans are steering the agency for now.

But Trump’s FTC seems determined to proceed in attempts to disrupt Meta’s business. FTC Chair Andrew Ferguson told Fox Business Monday that “antitrust laws can help make sure that no private sector company gets so powerful that it affects our lives in ways that are really bad for all Americans,” and “that’s what this trial beginning today is all about.”




Meta plans to test and tinker with X’s community notes algorithm

Meta also confirmed that it won’t be reducing visibility of misleading posts with community notes. That’s a change from the prior system, Meta noted, which had penalties associated with fact-checking.

According to Meta, X’s algorithm cannot be gamed, supposedly safeguarding “against organized campaigns” striving to manipulate notes and “influence what notes get published or what they say.” Meta claims it will rely on external research on community notes to avoid that pitfall, but as recently as last October, outside researchers had suggested that X’s Community Notes were easily sabotaged by toxic X users.
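X has published the ranking code behind Community Notes, and the reason it resists organized campaigns is that a note is only shown when raters who usually disagree with each other both find it helpful. Below is a deliberately simplified, hypothetical Python sketch of that bridging idea; it is not X’s or Meta’s actual implementation, and the cluster labels, threshold, and function names are invented for illustration.

from collections import defaultdict

# Hypothetical inputs: each rating is (rater_id, note_id, found_helpful), and each
# rater has been assigned a coarse viewpoint cluster ahead of time. X's real system
# infers viewpoints from rating history via matrix factorization; this is a stand-in.
RATER_CLUSTER = {"a": "cluster_1", "b": "cluster_1", "c": "cluster_2", "d": "cluster_2", "e": "cluster_2"}
RATINGS = [
    ("a", "note1", True), ("b", "note1", True), ("c", "note1", True), ("d", "note1", False),
    ("a", "note2", True), ("b", "note2", True), ("c", "note2", False), ("d", "note2", False), ("e", "note2", False),
]

def published_notes(ratings, rater_cluster, min_helpful_ratio=0.5):
    """Publish a note only if a majority of raters in every viewpoint cluster found it helpful."""
    tallies = defaultdict(lambda: defaultdict(lambda: [0, 0]))  # note -> cluster -> [helpful, total]
    for rater, note, helpful in ratings:
        cluster = rater_cluster[rater]
        tallies[note][cluster][0] += int(helpful)
        tallies[note][cluster][1] += 1
    published = []
    for note, clusters in tallies.items():
        # A single-faction brigade can flood one cluster's ratings, but it cannot
        # manufacture agreement from the other cluster, so the note stays unpublished.
        if all(helpful / total >= min_helpful_ratio for helpful, total in clusters.values()):
            published.append(note)
    return published

print(published_notes(RATINGS, RATER_CLUSTER))  # ['note1']; note2 lacks cross-cluster agreement

X’s real open-source scoring is more involved (it fits a matrix-factorization model and only surfaces notes whose helpfulness holds up after the viewpoint component is stripped out), but the cross-perspective agreement requirement is the core of the gaming resistance Meta is pointing to.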

“We don’t expect this process to be perfect, but we’ll continue to improve as we learn,” Meta said.

Meta confirmed that the company plans to tweak X’s algorithm over time to develop its own version of community notes and “may explore different or adjusted algorithms to support how Community Notes are ranked and rated.”

In a post, X’s Support account said that X was “excited” that Meta was using its “well-established, academically studied program as a foundation” for its community notes.



Meta to cut 5% of employees deemed unfit for Zuckerberg’s AI-fueled future

Anticipating that 2025 will be an “intense year” requiring rapid innovation, Mark Zuckerberg reportedly announced that Meta would be cutting 5 percent of its workforce—targeting “lowest performers.”

Bloomberg reviewed the internal memo explaining the cuts, which was posted to Meta’s internal Workplace forum Tuesday. In it, Zuckerberg confirmed that Meta was shifting its strategy to “move out low performers faster” so that Meta can hire new talent to fill those vacancies this year.

“I’ve decided to raise the bar on performance management,” Zuckerberg said. “We typically manage out people who aren’t meeting expectations over the course of a year, but now we’re going to do more extensive performance-based cuts during this cycle.”

Cuts will likely impact more than 3,600 employees, or roughly 5 percent of Meta’s most recent headcount of about 72,000 employees in September. It may not be as straightforward as letting go of anyone with an unsatisfactory performance review, as Zuckerberg said that any employee not currently meeting expectations could be spared if Meta is “optimistic about their future performance,” The Wall Street Journal reported.

Any employees affected will be notified by February 10 and receive “generous severance,” Zuckerberg’s memo promised.

This is the biggest round of cuts at Meta since 2023, when Meta laid off 10,000 employees during what Zuckerberg dubbed the “year of efficiency.” Those layoffs followed a prior round where 11,000 lost their jobs and Zuckerberg realized that “leaner is better.” He told employees in 2023 that a “surprising result” from reducing the workforce was “that many things have gone faster.”

“A leaner org will execute its highest priorities faster,” Zuckerberg wrote in 2023. “People will be more productive, and their work will be more fun and fulfilling. We will become an even greater magnet for the most talented people. That’s why in our Year of Efficiency, we are focused on canceling projects that are duplicative or lower priority and making every organization as lean as possible.”



Mastodon’s founder cedes control, refuses to become next Musk or Zuckerberg

And perhaps in a nod to Meta’s recent changes, Mastodon also vowed to “invest deeply in trust and safety” and ensure “everyone, especially marginalized communities,” feels “safe” on the platform.

To become a more user-focused paradise of “resilient, governable, open and safe digital spaces,” Mastodon is going to need a lot more funding. The blog called for donations to help fund an annual operating budget of $5.1 million (5 million euros) in 2025. That’s a massive leap from the $152,476 (149,400 euros) total operating expenses Mastodon reported in 2023.

Other social networks wary of EU regulations

Mastodon has decided to continue basing its operations in Europe, while still maintaining a separate US-based nonprofit entity as a “fundraising hub,” the blog said.

It will take time, Mastodon said, to “select the appropriate jurisdiction and structure in Europe” before Mastodon can then “determine which other (subsidiary) legal structures are needed to support operations and sustainability.”

While Mastodon is carefully getting re-settled as a nonprofit in Europe, Zuckerberg this week went on Joe Rogan’s podcast to call on Donald Trump to help US tech companies fight European Union fines, Politico reported.

Some critics suggest the recent policy changes on Meta platforms were intended to win Trump’s favor, partly to get Trump on Meta’s side in the fight against the EU’s strict digital laws. According to France24, Musk’s recent combativeness with EU officials suggests Musk might team up with Zuckerberg in that fight (unlike the cage fight between the two wealthy tech titans that never happened).

Experts told France24 that EU officials may “perhaps wrongly” already be fearful about ruffling Trump’s feathers by targeting his tech allies and would likely need to use the “full legal arsenal” of EU digital laws to “stand up to Big Tech” once Trump’s next term starts.

As Big Tech prepares to continue battling EU regulators, Mastodon appears to be taking a different route, laying roots in Europe and “establishing the appropriate governance and leadership frameworks that reflect the nature and purpose of Mastodon as a whole” and “responsibly serve the community,” its blog said.

“Our core mission remains the same: to create the tools and digital spaces where people can build authentic, constructive online communities free from ads, data exploitation, manipulative algorithms, or corporate monopolies,” Mastodon’s blog said.



Meta kills diversity programs, claiming DEI has become “too charged”

Meta has reportedly ended diversity, equity, and inclusion (DEI) programs that influenced staff hiring and training, as well as vendor decisions, effective immediately.

According to an internal memo viewed by Axios and verified by Ars, Meta’s vice president of human resources, Janelle Gale, told Meta employees that the shift came because the “legal and policy landscape surrounding diversity, equity, and inclusion efforts in the United States is changing.”

It’s another move by Meta that some view as part of the company’s larger effort to align with the incoming Trump administration’s politics. In December, Donald Trump promised to crack down on DEI initiatives at companies and on college campuses, The Guardian reported.

Earlier this week, Meta cut its fact-checking program, which was introduced in 2016 after Trump’s first election to prevent misinformation from spreading. In a statement announcing Meta’s pivot to X’s Community Notes-like approach to fact-checking, Meta CEO Mark Zuckerberg claimed that fact-checkers were “too politically biased” and “destroyed trust” on Meta platforms like Facebook, Instagram, and Threads.

Trump has also long promised to renew his war on alleged social media censorship while in office. Meta faced backlash this week over leaked rule changes relaxing its hate speech policies, The Intercept reported; Zuckerberg has said the old restrictions were “out of touch with mainstream discourse.” Those changes included allowing anti-trans slurs previously banned, as well as permitting women to be called “property” and gay people to be called “mentally ill,” Mashable reported. In a statement, GLAAD said that rolling back safety guardrails risked turning Meta platforms into “unsafe landscapes filled with dangerous hate speech, violence, harassment, and misinformation” and alleged that Meta appeared to be willing to “normalize anti-LGBTQ hatred for profit.”



Meta axes third-party fact-checkers in time for second Trump term


Zuckerberg says Meta will “work with President Trump” to fight censorship.

Meta CEO Mark Zuckerberg during the Meta Connect event in Menlo Park, California on September 25, 2024.  Credit: Getty Images | Bloomberg

Meta announced today that it’s ending the third-party fact-checking program it introduced in 2016, and will rely instead on a Community Notes approach similar to what’s used on Elon Musk’s X platform.

The end of third-party fact-checking and related changes to Meta policies could help the company make friends in the Trump administration and in governments of conservative-leaning states that have tried to impose legal limits on content moderation. The operator of Facebook and Instagram announced the changes in a blog post and a video message recorded by CEO Mark Zuckerberg.

“Governments and legacy media have pushed to censor more and more. A lot of this is clearly political,” Zuckerberg said. He said the recent elections “feel like a cultural tipping point toward once again prioritizing speech.”

“We’re going to get rid of fact-checkers and replace them with Community Notes, similar to X, starting in the US,” Zuckerberg said. “After Trump first got elected in 2016, the legacy media wrote nonstop about how misinformation was a threat to democracy. We tried in good faith to address those concerns without becoming the arbiters of truth. But the fact-checkers have just been too politically biased and have destroyed more trust than they’ve created, especially in the US.”

Meta says the soon-to-be-discontinued fact-checking program includes over 90 third-party organizations that evaluate posts in over 60 languages. The US-based fact-checkers are AFP USA, Check Your Fact, Factcheck.org, Lead Stories, PolitiFact, Science Feedback, Reuters Fact Check, TelevisaUnivision, The Dispatch, and USA Today.

The independent fact-checkers rate the accuracy of posts and apply ratings such as False, Altered, Partly False, Missing Context, Satire, and True. Meta adds notices to posts rated as false or misleading and notifies users before they try to share the content or if they shared it in the past.
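As a purely illustrative sketch of that labeling flow: the rating names below come from the program as described above, but the decision logic, function, and action names are hypothetical assumptions, not Meta’s actual code.

from enum import Enum

class FactCheckRating(Enum):
    # Rating labels applied by the third-party fact-checkers, as listed above.
    FALSE = "False"
    ALTERED = "Altered"
    PARTLY_FALSE = "Partly False"
    MISSING_CONTEXT = "Missing Context"
    SATIRE = "Satire"
    TRUE = "True"

# Hypothetical assumption: these are the ratings treated as "false or misleading."
MISLEADING = {FactCheckRating.FALSE, FactCheckRating.ALTERED, FactCheckRating.PARTLY_FALSE}

def actions_for(rating, sharing_now=False, shared_in_past=False):
    """Return the notices a hypothetical pipeline would trigger for a fact-checked post."""
    actions = []
    if rating in MISLEADING:
        actions.append("add_notice_to_post")          # label the post itself
        if sharing_now:
            actions.append("warn_before_sharing")     # notify users before they share it
        if shared_in_past:
            actions.append("notify_past_sharers")     # notify users who already shared it
    return actions

print(actions_for(FactCheckRating.PARTLY_FALSE, sharing_now=True))
# ['add_notice_to_post', 'warn_before_sharing']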

Meta: Experts “have their own biases”

In the blog post that accompanied Zuckerberg’s video message, Chief Global Affairs Officer Joel Kaplan said the 2016 decision to use independent fact-checkers seemed like “the best and most reasonable choice at the time… The intention of the program was to have these independent experts give people more information about the things they see online, particularly viral hoaxes, so they were able to judge for themselves what they saw and read.”

But experts “have their own biases and perspectives,” and the program imposed “intrusive labels and reduced distribution” of content “that people would understand to be legitimate political speech and debate,” Kaplan wrote.

The X-style Community Notes system lets the community “decide when posts are potentially misleading and need more context, and people across a diverse range of perspectives decide what sort of context is helpful for other users to see… Just like they do on X, Community Notes [on Meta sites] will require agreement between people with a range of perspectives to help prevent biased ratings,” Kaplan wrote.

The end of third-party fact-checking will be implemented in the US before other countries. Meta will also move its internal trust and safety and content moderation teams out of California, Zuckerberg said. “Our US-based content review is going to be based in Texas. As we work to promote free expression, I think it will help us build trust to do this work in places where there is less concern about the bias of our teams,” he said. Meta will continue to take “legitimately bad stuff” like drugs, terrorism, and child exploitation “very seriously,” Zuckerberg said.

Zuckerberg pledges to work with Trump

Meta will “phase in a more comprehensive community notes system” over the next couple of months, Zuckerberg said. Meta, which donated $1 million to Trump’s inaugural fund, will also “work with President Trump to push back on governments around the world that are going after American companies and pushing to censor more,” Zuckerberg said.

Zuckerberg said that “Europe has an ever-increasing number of laws institutionalizing censorship,” that “Latin American countries have secret courts that can quietly order companies to take things down,” and that “China has censored apps from even working in the country.” Meta needs “the support of the US government” to push back against other countries’ content-restriction orders, he said.

“That’s why it’s been so difficult over the past four years when even the US government has pushed for censorship,” Zuckerberg said, referring to the Biden administration. “By going after us and other American companies, it has emboldened other governments to go even further. But now we have the opportunity to restore free expression, and I am excited to take it.”

Brendan Carr, Trump’s pick to lead the Federal Communications Commission, praised Meta’s policy changes. Carr has promised to shift the FCC’s focus from regulating telecom companies to cracking down on Big Tech and media companies that he alleges are part of a “censorship cartel.”

“President Trump’s resolute and strong support for the free speech rights of everyday Americans is already paying dividends,” Carr wrote on X today. “Facebook’s announcements is [sic] a good step in the right direction. I look forward to monitoring these developments and their implementation. The work continues until the censorship cartel is completely dismantled and destroyed.”

Group: Meta is “saying the truth doesn’t matter”

Meta’s changes were criticized by Public Citizen, a nonprofit advocacy group founded by Ralph Nader. “Asking users to fact-check themselves is tantamount to Meta saying the truth doesn’t matter,” Public Citizen co-president Lisa Gilbert said. “Misinformation will flow more freely with this policy change, as we cannot assume that corrections will be made when false information proliferates. The American people deserve accurate information about our elections, health risks, the environment, and much more.”

Media advocacy group Free Press said that “Zuckerberg is one of many billionaires who are cozying up to dangerous demagogues like Trump and pushing initiatives that favor their bottom lines at the expense of everything and everyone else.” Meta appears to be abandoning its “responsibility to protect its many users, and align[ing] the company more closely with an incoming president who’s a known enemy of accountability,” Free Press Senior Counsel Nora Benavidez said.

X’s Community Notes system was criticized in a recent report by the Center for Countering Digital Hate (CCDH), which said it “found that 74 percent of accurate community notes on US election misinformation never get shown to users.” (X previously sued the CCDH, but the lawsuit was dismissed by a federal judge.)

Previewing other changes, Zuckerberg said that Meta will eliminate content restrictions “that are just out of touch with mainstream discourse” and change how it enforces policies “to reduce the mistakes that account for the vast majority of censorship on our platforms.”

“We used to have filters that scanned for any policy violation. Now, we’re going to focus those filters on tackling illegal and high-severity violations, and for lower severity violations, we’re going to rely on someone reporting an issue before we take action,” he said. “The problem is the filters make mistakes, and they take down a lot of content that they shouldn’t. So by dialing them back, we’re going to dramatically reduce the amount of censorship on our platforms.”

Meta to relax filters, recommend more political content

Zuckerberg said Meta will re-tune content filters “to require much higher confidence before taking down content.” He said this means Meta will “catch less bad stuff” but will “also reduce the number of innocent people’s posts and accounts that we accidentally take down.”

Meta has “built a lot of complex systems to moderate content,” he noted. Even if these systems “accidentally censor just 1 percent of posts, that’s millions of people, and we’ve reached a point where it’s just too many mistakes and too much censorship,” he said.
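The tradeoff Zuckerberg describes is essentially a classifier-threshold choice. Here is a hypothetical sketch of that shift; the high-severity examples echo the “legitimately bad stuff” Zuckerberg named earlier, while the threshold value, category names, and function are illustrative assumptions, not Meta’s real pipeline.

# Old behavior (per Zuckerberg): filters scanned for any suspected policy violation
# and acted on it. New behavior: automated removal only for high-severity violations
# flagged with high confidence; lower-severity detections wait for a user report.
HIGH_SEVERITY = {"illegal_drugs", "terrorism", "child_exploitation"}
AUTO_REMOVE_CONFIDENCE = 0.95  # illustrative stand-in for "much higher confidence"

def decide_action(violation_type, classifier_confidence, user_reported=False):
    """Return what a hypothetical post-change pipeline does with a flagged post."""
    if violation_type in HIGH_SEVERITY and classifier_confidence >= AUTO_REMOVE_CONFIDENCE:
        return "auto_remove"              # still enforced proactively
    if user_reported:
        return "queue_for_human_review"   # lower-severity enforcement now leans on reports
    return "leave_up"                     # fewer false-positive takedowns, more misses

print(decide_action("borderline_policy_violation", 0.80))        # leave_up
print(decide_action("borderline_policy_violation", 0.80, True))  # queue_for_human_review
print(decide_action("terrorism", 0.97))                          # auto_remove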

Kaplan wrote that Meta has censored too much harmless content and that “too many people find themselves wrongly locked up in ‘Facebook jail.'”

“In recent years we’ve developed increasingly complex systems to manage content across our platforms, partly in response to societal and political pressure to moderate content,” Kaplan wrote. “This approach has gone too far. As well-intentioned as many of these efforts have been, they have expanded over time to the point where we are making too many mistakes, frustrating our users and too often getting in the way of the free expression we set out to enable.”

Another upcoming change is that Meta will recommend more political posts. “For a while, the community asked to see less politics because it was making people stressed, so we stopped recommending these posts,” Zuckerberg said. “But it feels like we’re in a new era now, and we’re starting to get feedback that people want to see this content again, so we’re going to start phasing this back into Facebook, Instagram, and Threads while working to keep the communities friendly and positive.”


Jon is a Senior IT Reporter for Ars Technica. He covers the telecom industry, Federal Communications Commission rulemakings, broadband consumer affairs, court cases, and government regulation of the tech industry.



RFK Jr’s anti-vaccine group can’t sue Meta for agreeing with CDC, judge rules

Independent presidential candidate Robert F. Kennedy Jr.


The Children’s Health Defense (CHD), an anti-vaccine group founded by Robert F. Kennedy Jr, has once again failed to convince a court that Meta acted as a state agent when censoring the group’s posts and ads on Facebook and Instagram.

In his opinion affirming a lower court’s dismissal, US Ninth Circuit Court of Appeals Judge Eric Miller wrote that CHD failed to prove that Meta acted as an arm of the government in censoring posts. Concluding that Meta’s right to censor views that the platforms find “distasteful” is protected by the First Amendment, Miller denied CHD’s requested relief, which had included an injunction and civil monetary damages.

“Meta evidently believes that vaccines are safe and effective and that their use should be encouraged,” Miller wrote. “It does not lose the right to promote those views simply because they happen to be shared by the government.”

CHD told Reuters that the group “was disappointed with the decision and considering its legal options.”

The group first filed the complaint in 2020, arguing that Meta colluded with government officials to censor protected speech by labeling anti-vaccine posts as misleading or removing and shadowbanning CHD posts. This caused CHD’s traffic on the platforms to plummet, CHD claimed, and ultimately, its pages were removed from both platforms.

However, critically, Miller wrote, CHD did not allege that “the government was actually involved in the decisions to label CHD’s posts as ‘false’ or ‘misleading,’ the decision to put the warning label on CHD’s Facebook page, or the decisions to ‘demonetize’ or ‘shadow-ban.'”

“CHD has not alleged facts that allow us to infer that the government coerced Meta into implementing a specific policy,” Miller wrote.

Instead, Meta “was entitled to encourage” various “input from the government,” justifiably seeking vaccine-related information provided by the World Health Organization (WHO) and the US Centers for Disease Control and Prevention (CDC) as it navigated complex content moderation decisions throughout the pandemic, Miller wrote.

Therefore, Meta’s actions against CHD were due to “Meta’s own ‘policy of censoring,’ not any provision of federal law,” Miller concluded. “The evidence suggested that Meta had independent incentives to moderate content and exercised its own judgment in so doing.”

None of CHD’s theories that Meta coordinated with officials to deprive “CHD of its constitutional rights” were plausible, Miller wrote, whereas the “innocent alternative”—”that Meta adopted the policy it did simply because” CEO Mark Zuckerberg and Meta “share the government’s view that vaccines are safe and effective”—appeared “more plausible.”

Meta “does not become an agent of the government just because it decides that the CDC sometimes has a point,” Miller wrote.

Equally not persuasive were CHD’s notions that Section 230 immunity—which shields platforms from liability for third-party content—”‘removed all legal barriers’ to the censorship of vaccine-related speech,” such that “Meta’s restriction of that content should be considered state action.”

“That Section 230 operates in the background to immunize Meta if it chooses to suppress vaccine misinformation—whether because it shares the government’s health concerns or for independent commercial reasons—does not transform Meta’s choice into state action,” Miller wrote.

One judge dissented over Section 230 concerns

In his dissenting opinion, Judge Daniel Collins defended CHD’s Section 230 claim, however, suggesting that the appeals court erred and should have granted CHD injunctive and declaratory relief from alleged censorship. CHD CEO Mary Holland told The Defender that the group was pleased the decision was not unanimous.

According to Collins, who like Miller is a Trump appointee, Meta could never have built its massive social platforms without Section 230 immunity, which grants platforms the ability to broadly censor viewpoints they disfavor.

It was “important to keep in mind” that “the vast practical power that Meta exercises over the speech of millions of others ultimately rests on a government-granted privilege to which Meta is not constitutionally entitled,” Collins wrote. And this power “makes a crucial difference in the state-action analysis.”

As Collins sees it, CHD could plausibly allege that Meta’s communications with government officials about vaccine-related misinformation targeted specific users, like the “disinformation dozen” that includes both CHD and Kennedy. In that case, it appears possible to Collins that Section 230 provides a potential opportunity for government to target speech that it disfavors through mechanisms provided by the platforms.

“Having specifically and purposefully created an immunized power for mega-platform operators to freely censor the speech of millions of persons on those platforms, the Government is perhaps unsurprisingly tempted to then try to influence particular uses of such dangerous levers against protected speech expressing viewpoints the Government does not like,” Collins warned.

He further argued that “Meta’s relevant First Amendment rights” do not “give Meta an unbounded freedom to work with the Government in suppressing speech on its platforms.” Disagreeing with the majority, he wrote that “in this distinctive scenario, applying the state-action doctrine promotes individual liberty by keeping the Government’s hands away from the tempting levers of censorship on these vast platforms.”

The majority agreed, however, that while Section 230 immunity “is undoubtedly a significant benefit to companies like Meta,” lawmakers’ threats to weaken Section 230 did not suggest that Meta’s anti-vaccine policy was coerced state action.

“Many companies rely, in one way or another, on a favorable regulatory environment or the goodwill of the government,” Miller wrote. “If that were enough for state action, every large government contractor would be a state actor. But that is not the law.”



Meta risks sanctions over “sneaky” ad-free plans confusing users, EU says

Under pressure — Consumer laws may change Meta’s ad-free plans before EU’s digital crackdown does.


The European Commission (EC) has finally taken action to block Meta’s heavily criticized plan to charge a subscription fee to users who value privacy on its platforms.

Surprisingly, this step wasn’t taken under laws like the Digital Services Act (DSA), the Digital Markets Act (DMA), or the General Data Protection Regulation (GDPR).

Instead, the EC announced Monday that Meta risked sanctions under EU consumer laws if it could not resolve key concerns about Meta’s so-called “pay or consent” model.

Meta’s model is seemingly problematic, the commission said, because Meta “requested consumers overnight to either subscribe to use Facebook and Instagram against a fee or to consent to Meta’s use of their personal data to be shown personalized ads, allowing Meta to make revenue out of it.”

Because users were given such short notice, they may have been “exposed to undue pressure to choose rapidly between the two models, fearing that they would instantly lose access to their accounts and their network of contacts,” the EC said.

To protect consumers, the EC joined national consumer protection authorities, sending a letter to Meta requiring the tech giant to propose solutions to resolve the commission’s biggest concerns by September 1.

That Meta’s “pay or consent” model may be “misleading” is a top concern because it uses the term “free” for ad-based plans, even though Meta “can make revenue from using their personal data to show them personalized ads.” It seems that while Meta does not consider giving away personal information to be a cost to users, the EC’s commissioner for justice, Didier Reynders, apparently does.

“Consumers must not be lured into believing that they would either pay and not be shown any ads anymore, or receive a service for free, when, instead, they would agree that the company used their personal data to make revenue with ads,” Reynders said. “EU consumer protection law is clear in this respect. Traders must inform consumers upfront and in a fully transparent manner on how they use their personal data. This is a fundamental right that we will protect.”

Additionally, the EC is concerned that Meta users might be confused about how “to navigate through different screens in the Facebook/Instagram app or web-version and to click on hyperlinks directing them to different parts of the Terms of Service or Privacy Policy to find out how their preferences, personal data, and user-generated data will be used by Meta to show them personalized ads.” They may also find Meta’s “imprecise terms and language” confusing, such as Meta referring to “your info” instead of clearly referring to consumers’ “personal data.”

To resolve the EC’s concerns, Meta may have to give EU users more time to decide if they want to pay to subscribe or consent to personal data collection for targeted ads. Or Meta may have to take more drastic steps by altering language and screens used when securing consent to collect data or potentially even scrapping its “pay or consent” model entirely, as pressure in the EU mounts.

So far, Meta has defended its model against claims that it violates the DMA, the DSA, and the GDPR, and Meta’s spokesperson told Ars that Meta continues to defend the model while facing down the EC’s latest action.

“Subscriptions as an alternative to advertising are a well-established business model across many industries,” Meta’s spokesperson told Ars. “Subscription for no ads follows the direction of the highest court in Europe and we are confident it complies with European regulation.”

Meta’s model is “sneaky,” EC said

Since last year, the social media company has argued that its “subscription for no ads” model was “endorsed” by the highest court in Europe, the Court of Justice of the European Union (CJEU).

However, privacy advocates have noted that this alleged endorsement came following a CJEU case under the GDPR and was only presented as a hypothetical, rather than a formal part of the ruling, as Meta seems to interpret.

What the CJEU said was that “users must be free to refuse individually”—”in the context of” signing up for services—”to give their consent to particular data processing operations not necessary” for Meta to provide such services “without being obliged to refrain entirely from using the service.” That “means that those users are to be offered, if necessary for an appropriate fee, an equivalent alternative not accompanied by such data processing operations,” the CJEU said.

The nuance here may matter when it comes to Meta’s proposed solutions even if the EC accepts the CJEU’s suggestion of an acceptable alternative as setting some sort of legal precedent. Because the consumer protection authorities raised the action due to Meta suddenly changing the consent model for existing users—not “in the context of” signing up for services—Meta may struggle to persuade the EC that existing users weren’t misled and pressured into paying for a subscription or consenting to ads, given how fast Meta’s policy shifted.

Meta risks sanctions if a compromise can’t be reached, the EC said. Under the EU’s Unfair Contract Terms Directive, for example, Meta could be fined up to 4 percent of its annual turnover if consumer protection authorities are unsatisfied with Meta’s proposed solutions.

The EC’s vice president for values and transparency, Věra Jourová, provided a statement in the press release, calling Meta’s abrupt introduction of the “pay or consent” model “sneaky.”

“We are proud of our strong consumer protection laws which empower Europeans to have the right to be accurately informed about changes such as the one proposed by Meta,” Jourová said. “In the EU, consumers are able to make truly informed choices and we now take action to safeguard this right.”



Meta halts plans to train AI on Facebook, Instagram posts in EU

Not so fast — Meta was going to start training AI on Facebook and Instagram posts on June 26.


Meta has apparently paused plans to process mounds of user data to bring new AI experiences to Europe.

The decision comes after data regulators rebuffed the tech giant’s claims that it had “legitimate interests” in processing European Union- and European Economic Area (EEA)-based Facebook and Instagram users’ data—including personal posts and pictures—to train future AI tools.

There’s not much information available yet on Meta’s decision. But Meta’s EU regulator, the Irish Data Protection Commission (DPC), posted a statement confirming that Meta made the move after ongoing discussions with the DPC about compliance with the EU’s strict data privacy laws, including the General Data Protection Regulation (GDPR).

“The DPC welcomes the decision by Meta to pause its plans to train its large language model using public content shared by adults on Facebook and Instagram across the EU/EEA,” the DPC said. “This decision followed intensive engagement between the DPC and Meta. The DPC, in co-operation with its fellow EU data protection authorities, will continue to engage with Meta on this issue.”

The European Center for Digital Rights, known as Noyb, had filed 11 complaints across the EU and intended to file more to stop Meta from moving forward with its AI plans. The DPC initially gave Meta AI the green light to proceed but has now made a U-turn, Noyb said.

Meta’s policy still requires update

In a blog, Meta had previously teased new AI features coming to the EU, including everything from customized stickers for chats and stories to Meta AI, a “virtual assistant you can access to answer questions, generate images, and more.” Meta had argued that training on EU users’ personal data was necessary so that AI services could reflect “the diverse cultures and languages of the European communities who will use them.”

Before the pause, the company had been hoping to rely “on the legal basis of ‘legitimate interests’” to process the data, because it’s needed “to improve AI at Meta.” But Noyb and EU data regulators had argued that Meta’s legal basis did not comply with the GDPR, with the Norwegian Data Protection Authority arguing that “the most natural thing would have been to ask the users for their consent before their posts and images are used in this way.”

Rather than ask for consent, however, Meta had given EU users until June 26 to opt out. Noyb had alleged that in going this route, Meta planned to use “dark patterns” to thwart AI opt-outs in the EU and collect as much data as possible to fuel undisclosed AI technologies. Noyb urgently argued that once users’ data is in the system, “users seem to have no option of ever having it removed.”

Noyb said that the “obvious explanation” for Meta seemingly halting its plans was pushback from EU officials, but the privacy advocacy group also warned EU users that Meta’s privacy policy has not yet been fully updated to reflect the pause.

“We welcome this development but will monitor this closely,” Max Schrems, Noyb chair, said in a statement provided to Ars. “So far there is no official change of the Meta privacy policy, which would make this commitment legally binding. The cases we filed are ongoing and will need a determination.”

Ars was not immediately able to reach Meta for comment.



“CSAM generated by AI is still CSAM,” DOJ says after rare arrest


The US Department of Justice has started cracking down on the use of AI image generators to produce child sexual abuse materials (CSAM).

On Monday, the DOJ arrested Steven Anderegg, a 42-year-old “extremely technologically savvy” Wisconsin man who allegedly used Stable Diffusion to create “thousands of realistic images of prepubescent minors,” which were then distributed on Instagram and Telegram.

The cops were tipped off to Anderegg’s alleged activities after Instagram flagged direct messages that were sent on Anderegg’s Instagram account to a 15-year-old boy. Instagram reported the messages to the National Center for Missing and Exploited Children (NCMEC), which subsequently alerted law enforcement.

During the Instagram exchange, the DOJ found that Anderegg sent sexually explicit AI images of minors soon after the teen made his age known, alleging that “the only reasonable explanation for sending these images was to sexually entice the child.”

According to the DOJ’s indictment, Anderegg is a software engineer with “professional experience working with AI.” Because of his “special skill” in generative AI (GenAI), he was allegedly able to generate the CSAM using a version of Stable Diffusion, “along with a graphical user interface and special add-ons created by other Stable Diffusion users that specialized in producing genitalia.”

After Instagram reported Anderegg’s messages to the minor, cops seized Anderegg’s laptop and found “over 13,000 GenAI images, with hundreds—if not thousands—of these images depicting nude or semi-clothed prepubescent minors lasciviously displaying or touching their genitals” or “engaging in sexual intercourse with men.”

In his messages to the teen, Anderegg seemingly “boasted” about his skill in generating CSAM, the indictment said. The DOJ alleged that evidence from his laptop showed that Anderegg “used extremely specific and explicit prompts to create these images,” including “specific ‘negative’ prompts—that is, prompts that direct the GenAI model on what not to include in generated content—to avoid creating images that depict adults.” These go-to prompts were stored on his computer, the DOJ alleged.

Anderegg is currently in federal custody and has been charged with production, distribution, and possession of AI-generated CSAM, as well as “transferring obscene material to a minor under the age of 16,” the indictment said.

Because the DOJ suspected that Anderegg intended to use the AI-generated CSAM to groom a minor, the DOJ is arguing that there are “no conditions of release” that could prevent him from posing a “significant danger” to his community while the court mulls his case. The DOJ warned the court that it’s highly likely that any future contact with minors could go unnoticed, as Anderegg is seemingly tech-savvy enough to hide any future attempts to send minors AI-generated CSAM.

“He studied computer science and has decades of experience in software engineering,” the indictment said. “While computer monitoring may address the danger posed by less sophisticated offenders, the defendant’s background provides ample reason to conclude that he could sidestep such restrictions if he decided to. And if he did, any reoffending conduct would likely go undetected.”

If convicted of all four counts, he could face “a total statutory maximum penalty of 70 years in prison and a mandatory minimum of five years in prison,” the DOJ said. Partly because of his “special skill in GenAI,” the DOJ—which described its evidence against Anderegg as “strong”—suggested that it may recommend a sentencing range “as high as life imprisonment.”

Announcing Anderegg’s arrest, Deputy Attorney General Lisa Monaco made it clear that creating AI-generated CSAM is illegal in the US.

“Technology may change, but our commitment to protecting children will not,” Monaco said. “The Justice Department will aggressively pursue those who produce and distribute child sexual abuse material—or CSAM—no matter how that material was created. Put simply, CSAM generated by AI is still CSAM, and we will hold accountable those who exploit AI to create obscene, abusive, and increasingly photorealistic images of children.”



Robert F. Kennedy Jr. sues Meta, citing chatbot’s reply as evidence of shadowban

Screenshot from the documentary Who Is Bobby Kennedy?

In a lawsuit that seems determined to ignore that Section 230 exists, Robert F. Kennedy Jr. has sued Meta for allegedly shadowbanning his million-dollar documentary, Who Is Bobby Kennedy? and preventing his supporters from advocating for his presidential campaign.

According to Kennedy, Meta is colluding with the Biden administration to sway the 2024 presidential election by suppressing Kennedy’s documentary and making it harder to support Kennedy’s candidacy. This allegedly has caused “substantial donation losses,” while also violating the free speech rights of Kennedy, his supporters, and his film’s production company, AV24.

Meta had initially restricted the documentary on Facebook and Instagram but later fixed the issue after discovering that the film was mistakenly flagged by the platforms’ automated spam filters.

But Kennedy’s complaint claimed that Meta is still “brazenly censoring speech” by “continuing to throttle, de-boost, demote, and shadowban the film.” In an exhibit, Kennedy’s lawyers attached screenshots representing “hundreds” of Facebook and Instagram users whom Meta allegedly sent threats, intimidated, and sanctioned after they shared the documentary.

Some of these users remain suspended on Meta platforms, the complaint alleged. Others whose temporary suspensions have been lifted claimed that their posts are still being throttled, though, and Kennedy’s lawyers earnestly insisted that an exchange with Meta’s chatbot proves it.

Two days after the documentary’s release, Kennedy’s team apparently asked the Meta AI assistant, “When users post the link whoisbobbykennedy.com, can their followers see the post in their feeds?”

“I can tell you that the link is currently restricted by Meta,” the chatbot answered.

Chatbots, of course, are notoriously inaccurate sources of information, and Meta AI’s terms of service note this. In a section labeled “accuracy,” Meta warns that chatbot responses “may not reflect accurate, complete, or current information” and should always be verified.

Perhaps more significantly, there is little reason to think that Meta’s chatbot would have access to information about internal content moderation decisions.

Techdirt’s Mike Masnick mocked Kennedy’s reliance on the chatbot in the case. He noted that Kennedy seemed to have no evidence of the alleged shadow-banning, while there’s plenty of evidence that Meta’s spam filters accidentally remove non-violative content all the time.

Meta’s chatbot is “just a probabilistic stochastic parrot, repeating a probable sounding answer to users’ questions,” Masnick wrote. “And these idiots think it’s meaningful evidence. This is beyond embarrassing.”

Neither Meta nor Kennedy’s lawyer, Jed Rubenfeld, responded to Ars’ request to comment.
