Anticipating that 2025 will be an “intense year” requiring rapid innovation, Mark Zuckerberg reportedly announced that Meta would be cutting 5 percent of its workforce—targeting “lowest performers.”
Bloomberg reviewed the internal memo explaining the cuts, which was posted to Meta’s internal Workplace forum Tuesday. In it, Zuckerberg confirmed that Meta was shifting its strategy to “move out low performers faster” so that Meta can hire new talent to fill those vacancies this year.
“I’ve decided to raise the bar on performance management,” Zuckerberg said. “We typically manage out people who aren’t meeting expectations over the course of a year, but now we’re going to do more extensive performance-based cuts during this cycle.”
Cuts will likely impact more than 3,600 employees, as Meta’s most recent headcount in September totaled about 72,000 employees. It may not be as straightforward as letting go of anyone with an unsatisfactory performance review, as Zuckerberg said that any employee not currently meeting expectations could be spared if Meta is “optimistic about their future performance,” The Wall Street Journal reported.
Any employees affected will be notified by February 10 and receive “generous severance,” Zuckerberg’s memo promised.
This is the biggest round of cuts at Meta since 2023, when Meta laid off 10,000 employees during what Zuckerberg dubbed the “year of efficiency.” Those layoffs followed a prior round where 11,000 lost their jobs and Zuckerberg realized that “leaner is better.” He told employees in 2023 that a “surprising result” from reducing the workforce was “that many things have gone faster.”
“A leaner org will execute its highest priorities faster,” Zuckerberg wrote in 2023. “People will be more productive, and their work will be more fun and fulfilling. We will become an even greater magnet for the most talented people. That’s why in our Year of Efficiency, we are focused on canceling projects that are duplicative or lower priority and making every organization as lean as possible.”
And perhaps in a nod to Meta’s recent changes, Mastodon also vowed to “invest deeply in trust and safety” and ensure “everyone, especially marginalized communities,” feels “safe” on the platform.
To become a more user-focused paradise of “resilient, governable, open and safe digital spaces,” Mastodon is going to need a lot more funding. The blog called for donations to help fund an annual operating budget of $5.1 million (5 million euros) in 2025. That’s a massive leap from the $152,476 (149,400 euros) total operating expenses Mastodon reported in 2023.
Other social networks wary of EU regulations
Mastodon has decided to continue basing its operations in Europe, while still maintaining a separate US-based nonprofit entity as a “fundraising hub,” the blog said.
It will take time, Mastodon said, to “select the appropriate jurisdiction and structure in Europe” before Mastodon can then “determine which other (subsidiary) legal structures are needed to support operations and sustainability.”
While Mastodon is carefully getting re-settled as a nonprofit in Europe, Zuckerberg this week went on Joe Rogan’s podcast to call on Donald Trump to help US tech companies fight European Union fines, Politico reported.
Experts told France24 that EU officials may “perhaps wrongly” already be fearful about ruffling Trump’s feathers by targeting his tech allies and would likely need to use the “full legal arsenal” of EU digital laws to “stand up to Big Tech” once Trump’s next term starts.
As Big Tech prepares to continue battling EU regulators, Mastodon appears to be taking a different route, laying roots in Europe and “establishing the appropriate governance and leadership frameworks that reflect the nature and purpose of Mastodon as a whole” and “responsibly serve the community,” its blog said.
“Our core mission remains the same: to create the tools and digital spaces where people can build authentic, constructive online communities free from ads, data exploitation, manipulative algorithms, or corporate monopolies,” Mastodon’s blog said.
Meta has reportedly ended diversity, equity, and inclusion (DEI) programs that influenced staff hiring and training, as well as vendor decisions, effective immediately.
According to an internal memo viewed by Axios and verified by Ars, Meta’s vice president of human resources, Janelle Gale, told Meta employees that the shift came because the “legal and policy landscape surrounding diversity, equity, and inclusion efforts in the United States is changing.”
It’s another move by Meta that some view as part of the company’s larger effort to align with the incoming Trump administration’s politics. In December, Donald Trump promised to crack down on DEI initiatives at companies and on college campuses, The Guardian reported.
Earlier this week, Meta cut its fact-checking program, which was introduced in 2016 after Trump’s first election to prevent misinformation from spreading. In a statement announcing Meta’s pivot to X’s Community Notes-like approach to fact-checking, Meta CEO Mark Zuckerberg claimed that fact-checkers were “too politically biased” and “destroyed trust” on Meta platforms like Facebook, Instagram, and Threads.
Trump has also long promised to renew his war on alleged social media censorship while in office. Meta faced backlash this week over leaked rule changes relaxing Meta’s hate speech policies, The Intercept reported; Zuckerberg said the restrictions being relaxed were “out of touch with mainstream discourse.” Those changes included allowing anti-trans slurs previously banned, as well as permitting women to be called “property” and gay people to be called “mentally ill,” Mashable reported. In a statement, GLAAD said that rolling back safety guardrails risked turning Meta platforms into “unsafe landscapes filled with dangerous hate speech, violence, harassment, and misinformation” and alleged that Meta appeared to be willing to “normalize anti-LGBTQ hatred for profit.”
Zuckerberg says Meta will “work with President Trump” to fight censorship.
Meta announced today that it’s ending the third-party fact-checking program it introduced in 2016, and will rely instead on a Community Notes approach similar to what’s used on Elon Musk’s X platform.
The end of third-party fact-checking and related changes to Meta policies could help the company make friends in the Trump administration and in governments of conservative-leaning states that have tried to impose legal limits on content moderation. The operator of Facebook and Instagram announced the changes in a blog post and a video message recorded by CEO Mark Zuckerberg.
“Governments and legacy media have pushed to censor more and more. A lot of this is clearly political,” Zuckerberg said. He said the recent elections “feel like a cultural tipping point toward once again prioritizing speech.”
“We’re going to get rid of fact-checkers and replace them with Community Notes, similar to X, starting in the US,” Zuckerberg said. “After Trump first got elected in 2016, the legacy media wrote nonstop about how misinformation was a threat to democracy. We tried in good faith to address those concerns without becoming the arbiters of truth. But the fact-checkers have just been too politically biased and have destroyed more trust than they’ve created, especially in the US.”
Meta says the soon-to-be-discontinued fact-checking program includes over 90 third-party organizations that evaluate posts in over 60 languages. The US-based fact-checkers are AFP USA, Check Your Fact, Factcheck.org, Lead Stories, PolitiFact, Science Feedback, Reuters Fact Check, TelevisaUnivision, The Dispatch, and USA Today.
The independent fact-checkers rate the accuracy of posts and apply ratings such as False, Altered, Partly False, Missing Context, Satire, and True. Meta adds notices to posts rated as false or misleading and notifies users before they try to share the content or if they shared it in the past.
Meta: Experts “have their own biases”
In the blog post that accompanied Zuckerberg’s video message, Chief Global Affairs Officer Joel Kaplan said the 2016 decision to use independent fact-checkers seemed like “the best and most reasonable choice at the time… The intention of the program was to have these independent experts give people more information about the things they see online, particularly viral hoaxes, so they were able to judge for themselves what they saw and read.”
But experts “have their own biases and perspectives,” and the program imposed “intrusive labels and reduced distribution” of content “that people would understand to be legitimate political speech and debate,” Kaplan wrote.
The X-style Community Notes system lets the community “decide when posts are potentially misleading and need more context, and people across a diverse range of perspectives decide what sort of context is helpful for other users to see… Just like they do on X, Community Notes [on Meta sites] will require agreement between people with a range of perspectives to help prevent biased ratings,” Kaplan wrote.
The end of third-party fact-checking will be implemented in the US before other countries. Meta will also move its internal trust and safety and content moderation teams out of California, Zuckerberg said. “Our US-based content review is going to be based in Texas. As we work to promote free expression, I think it will help us build trust to do this work in places where there is less concern about the bias of our teams,” he said. Meta will continue to take “legitimately bad stuff” like drugs, terrorism, and child exploitation “very seriously,” Zuckerberg said.
Zuckerberg pledges to work with Trump
Meta will “phase in a more comprehensive community notes system” over the next couple of months, Zuckerberg said. Meta, which donated $1 million to Trump’s inaugural fund, will also “work with President Trump to push back on governments around the world that are going after American companies and pushing to censor more,” Zuckerberg said.
Zuckerberg said that “Europe has an ever-increasing number of laws institutionalizing censorship,” that “Latin American countries have secret courts that can quietly order companies to take things down,” and that “China has censored apps from even working in the country.” Meta needs “the support of the US government” to push back against other countries’ content-restriction orders, he said.
“That’s why it’s been so difficult over the past four years when even the US government has pushed for censorship,” Zuckerberg said, referring to the Biden administration. “By going after us and other American companies, it has emboldened other governments to go even further. But now we have the opportunity to restore free expression, and I am excited to take it.”
Brendan Carr, Trump’s pick to lead the Federal Communications Commission, praised Meta’s policy changes. Carr has promised to shift the FCC’s focus from regulating telecom companies to cracking down on Big Tech and media companies that he alleges are part of a “censorship cartel.”
“President Trump’s resolute and strong support for the free speech rights of everyday Americans is already paying dividends,” Carr wrote on X today. “Facebook’s announcements is [sic] a good step in the right direction. I look forward to monitoring these developments and their implementation. The work continues until the censorship cartel is completely dismantled and destroyed.”
Group: Meta is “saying the truth doesn’t matter”
Meta’s changes were criticized by Public Citizen, a nonprofit advocacy group founded by Ralph Nader. “Asking users to fact-check themselves is tantamount to Meta saying the truth doesn’t matter,” Public Citizen co-president Lisa Gilbert said. “Misinformation will flow more freely with this policy change, as we cannot assume that corrections will be made when false information proliferates. The American people deserve accurate information about our elections, health risks, the environment, and much more.”
Media advocacy group Free Press said that “Zuckerberg is one of many billionaires who are cozying up to dangerous demagogues like Trump and pushing initiatives that favor their bottom lines at the expense of everything and everyone else.” Meta appears to be abandoning its “responsibility to protect its many users, and align[ing] the company more closely with an incoming president who’s a known enemy of accountability,” Free Press Senior Counsel Nora Benavidez said.
X’s Community Notes system was criticized in a recent report by the Center for Countering Digital Hate (CCDH), which said it “found that 74 percent of accurate community notes on US election misinformation never get shown to users.” (X previously sued the CCDH, but the lawsuit was dismissed by a federal judge.)
Previewing other changes, Zuckerberg said that Meta will eliminate content restrictions “that are just out of touch with mainstream discourse” and change how it enforces policies “to reduce the mistakes that account for the vast majority of censorship on our platforms.”
“We used to have filters that scanned for any policy violation. Now, we’re going to focus those filters on tackling illegal and high-severity violations, and for lower severity violations, we’re going to rely on someone reporting an issue before we take action,” he said. “The problem is the filters make mistakes, and they take down a lot of content that they shouldn’t. So by dialing them back, we’re going to dramatically reduce the amount of censorship on our platforms.”
Meta to relax filters, recommend more political content
Zuckerberg said Meta will re-tune content filters “to require much higher confidence before taking down content.” He said this means Meta will “catch less bad stuff” but will “also reduce the number of innocent people’s posts and accounts that we accidentally take down.”
Meta has “built a lot of complex systems to moderate content,” he noted. Even if these systems “accidentally censor just 1 percent of posts, that’s millions of people, and we’ve reached a point where it’s just too many mistakes and too much censorship,” he said.
Kaplan wrote that Meta has censored too much harmless content and that “too many people find themselves wrongly locked up in ‘Facebook jail.'”
“In recent years we’ve developed increasingly complex systems to manage content across our platforms, partly in response to societal and political pressure to moderate content,” Kaplan wrote. “This approach has gone too far. As well-intentioned as many of these efforts have been, they have expanded over time to the point where we are making too many mistakes, frustrating our users and too often getting in the way of the free expression we set out to enable.”
Another upcoming change is that Meta will recommend more political posts. “For a while, the community asked to see less politics because it was making people stressed, so we stopped recommending these posts,” Zuckerberg said. “But it feels like we’re in a new era now, and we’re starting to get feedback that people want to see this content again, so we’re going to start phasing this back into Facebook, Instagram, and Threads while working to keep the communities friendly and positive.”
During her tenure as the EU’s competition chief, Margrethe Vestager has repeatedly targeted the world’s biggest tech companies, with some of the toughest actions against tech giants such as Apple, Google, and Microsoft.
The EU Commission on Thursday said Meta is “dominant in the market for personal social networks (…) as well as in the national markets for online display advertising on social media.”
Facebook Marketplace, launched in 2016, is a popular platform to buy and sell second-hand goods, especially household items such as furniture.
Meta has argued that it operates in a highly competitive environment. In a post published on Thursday, the tech giant said marketplaces in Europe continue “to grow and dominate in the EU,” pointing to platforms such as eBay, Leboncoin in France, and Marktplaats in the Netherlands as “formidable competitors.”
Meta’s fine comes during a period of political transition in both the EU and the US.
Brussels officials have been aggressive in both their rhetoric and their antitrust probes against Big Tech giants as they seek to open markets for local start-ups.
In the past five years, EU regulators have also passed a landmark piece of legislation—the Digital Markets Act—with the aim of slowing down dominant tech players and boosting the local tech industry.
However, some observers expect the new commission, which is set to start a new five-year term within weeks, to strike a more conciliatory tone over fears of retaliation from the incoming Trump administration.
The lawsuit was filed by Ethan Zuckerman, a professor at the University of Massachusetts Amherst. He feared that Meta might sue to block his tool, Unfollow Everything 2.0, because Meta threatened to sue to block the original tool when it was released by another developer. In May, Zuckerman told Ars that he was “suing Facebook to make it better” and planned to use Section 230’s shield to do it.
Zuckerman’s novel legal theory argued that Congress always intended for Section 230 to protect third-party tools designed to empower users to take control over potentially toxic online environments. In his complaint, Zuckerman tried to convince a US district court in California that:
Section 230(c)(2)(B) immunizes from legal liability “a provider of software or enabling tools that filter, screen, allow, or disallow content that the provider or user considers obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable.” Through this provision, Congress intended to promote the development of filtering tools that enable users to curate their online experiences and avoid content they would rather not see.
Digital rights advocates at the Electronic Frontier Foundation (EFF), the Center for Democracy and Technology, and the American Civil Liberties Union of Northern California supported Zuckerman’s case, urging the court to protect middleware. But on Thursday, Judge Jacqueline Scott Corley granted Meta’s motion to dismiss at a hearing.
Corley has not yet posted her order on the motion to dismiss, but Zuckerman’s lawyers at the Knight Institute confirmed to Ars that their Section 230 argument did not factor into her decision. In a statement, the lawyers said that Corley left the door open on the Section 230 claims, and EFF senior staff attorney Sophia Cope, who was at the hearing, told Ars that Corley agreed that on “the merits the case raises important issues.”
Facebook, Nvidia ask SCOTUS to narrow legal paths to recover investor losses.
The Supreme Court will soon weigh two cases that could potentially make it harder for misled investors to sue Big Tech companies after major scandals.
One case involves one of the largest tech scandals of all time, the Facebook-Cambridge Analytica data breach. In 2019, Facebook agreed to pay “more than $5 billion in civil penalties to settle charges by the Federal Trade Commission (FTC) and the Securities and Exchange Commission (SEC) that it had misled its users and investors over the privacy and security of user data on its platform,” a Supreme Court filing said.
The other case involves an allegation that Nvidia intentionally hid how much of its 2017–2018 GPU demand was due to a volatile cryptocurrency boom and not Nvidia’s core gaming business—allegedly misleading investors ahead of a crypto crash. After the bust, Nvidia suddenly had to slash half a billion dollars from its earnings projection, and market experts later estimated that the firm had understated its crypto-related revenue by more than $1 billion. In 2022, Nvidia paid a $5.5 million SEC penalty over the inadequate disclosures that one SEC chief said “deprived investors of critical information to evaluate the company’s business in a key market.”
Investors, however, have not yet settled their own legal challenges. In both cases, the investors suing persuaded the 9th Circuit that their claims that the companies misled investors should proceed. But now, the tech companies have appealed to the Supreme Court, hoping to reverse those rulings.
In case documents, each company claimed that its investors have not satisfied high legal bars, which Nvidia argued Congress designed to prevent “frivolous” or “nuisance” lawsuits from going on “fishing expeditions” to claim securities “fraud by hindsight.” Both warned that SCOTUS upholding the 9th Circuit rulings risked flooding courts with frivolous suits, with Nvidia cautioning that such lawsuits can be “used to injure the entire US economy.”
The Supreme Court will hear arguments in the Facebook case on Wednesday, November 6, then the Nvidia case on November 13.
SCOTUS may be persuaded by tech companies still coping with the aftermath of scandals. A former SEC lawyer, Andrew Feller, told Reuters that the Supreme Court’s conservative majority may continue its “recent track record of handing down business-friendly decisions that narrowed the authority of federal regulators” in these cases. Both cases give justices opportunities to “rein in the power of private plaintiffs to enforce federal rules aimed at punishing corporate misconduct,” Reuters reported.
Facebook defends describing risk as hypothetical
The Facebook case centers on an SEC disclosure where Facebook said that its business may be harmed by a data breach, posing that as a hypothetical, without mentioning the ongoing Cambridge Analytica data breach. Specifically, Facebook wrote, “[a]ny failure to prevent or mitigate . . . improper access to or disclosure of our data or user data . . . could result in the loss or misuse of such data, which could harm our business and reputation and diminish our competitive position.”
Investors felt misled, accusing Facebook of hiding the breach by presenting the risk only as a hypothetical, which implied that no breach had ever occurred in the past and certainly did not disclose the ongoing risk.
However, in a SCOTUS filing, Facebook insisted that “no reasonable investor would interpret a risk disclosure using probabilistic, forward-looking language as impliedly representing that the specified triggering event had never occurred in the past.”
Facebook is now arguing that SCOTUS agreeing that the company should have disclosed the major data breach “would result in a regime under which companies would be required to disclose every previous material incident they have experienced—effectively creating a sweeping regime of omissions liability.”
According to Facebook, news broke about the Cambridge Analytica data breach in 2015, and its business wasn’t immediately harmed. Following that logic, the social media company hopes that SCOTUS will agree that Facebook was only required to disclose the data breach in its SEC filing if Facebook knew its business would likely be harmed from the ongoing breach.
By affirming the 9th Circuit ruling, Facebook alleged, SCOTUS would be “vastly expanding the circumstances in which risk disclosures are deemed false or misleading,” exposing to legal challenges “a wide range of previously immune forward-looking statements—revenue projections, future business plans or objectives, and the like.”
But investors suing argue that Facebook is still being misleading about the data scandal in its court filings.
“The only reason Facebook has ever given to explain why the misappropriation risked no harm was that the event was allegedly disclosed to the public in 2015 and no one cared,” investors’ SCOTUS brief said. But the 2015 report in question, which tied a data breach to Ted Cruz’s campaign, was denied by Cambridge Analytica and prompted a Facebook investigation that concluded no damage had been done.
“Facebook actively misled the public about its investigation, ‘represent[ing] that no misconduct had been discovered,'” investors alleged, and “Facebook’s deception extended to its public filings with the SEC.”
According to investors, the real damage was done when the true extent of the Cambridge Analytica scandal was exposed in 2018. That caused substantial revenue losses that Facebook likely understood it was risking while allegedly leaving investors blind to those risks for years.
Investors argue that disclosure should not be required of every data breach that hits Facebook, whether it harms its business or not, but that the Cambridge Analytica data breach was significant and should have been disclosed as a material risk. The 9th Circuit agreed, holding that “publicly treating such a material adverse event as a merely hypothetical prospect can be misleading even if the event has not yet produced follow-on business harm because the company has kept the truth from the public.”
They further argued that requiring so-called overdisclosure wouldn’t trigger unwarranted litigation, as Facebook suggests, because Congress has always “given considerable attention to concerns over abusive private litigation.”
If Facebook wins, investors alleged, SCOTUS risks giving any tech company “a license to intentionally mislead investors about the occurrence of hugely material events by describing those events as purely hypothetical prospects.” Siding with Facebook would allegedly give “companies an incentive to stuff their annual reports with boilerplate, generic warnings that reveal little about the company’s actual business and to cover up events that could give rise to corporate scandals, as Facebook did here.”
Facebook argued that if the SEC is concerned about specific disclosures connected to the data breach, “the SEC can invoke the rulemaking process to impose” a requirement that companies must disclose all “past material adverse events.”
Nvidia disputes expert’s crypto data
While the Facebook case involved a bigger scandal, the Nvidia case could have bigger legal implications if Nvidia wins.
In the Nvidia case, investors argued that Nvidia CEO Jensen Huang made public statements allegedly misleading investors by downplaying the high demand for GPUs tied to volatile crypto markets. To plead their case, investors relied on statements from Nvidia employees, internal documents like meeting slides, industry research, as well as an expert opinion crunching general market numbers and estimating that Nvidia “underreported its crypto revenues by $1.126 billion.”
Nvidia claimed it’s far more plausible that the company simply made an “honest miscalculation” while navigating a complex emerging market.
To defend against the suit, Nvidia is arguing that the Private Securities Litigation Reform Act (PSLRA) imposes “special burdens on plaintiffs seeking to bring federal securities fraud class actions” through “heightened pleading requirements” to deter frivolous lawsuits arguing fraud by hindsight.
According to Nvidia, the PSLRA requires investors to allege particular facts based on particular contents of internal Nvidia documents, which goes beyond relying on an expert opinion. The tech company has urged SCOTUS to find that the 9th Circuit “significantly erode[d]” the PSLRA requirements by allowing plaintiffs to “simply” hire “an expert who manufactured data to fit their allegations.”
“They hired an expert to create data and then filed a class action alleging that Nvidia and its CEO committed securities fraud by failing to disclose the data invented by Plaintiffs’ expert,” Nvidia argued.
This allegedly “eviscerates the guardrails that Congress erected to protect the public from abusive securities litigation” and creates a “dangerous” and “easy-to-replicate ‘roadmap’ for plaintiffs to sidestep the PSLRA in this recurring context.”
“Far from serving Congress’s goal of guarding against fishing expeditions by vexatious litigants, the Ninth Circuit’s opinion declares it open season so long as a plaintiff has funding to hire an expert,” Nvidia alleged.
Investors are hoping SCOTUS will uphold the 9th Circuit’s judgment. Instead of seeing their suit as frivolous, they argued that the SEC fine over the same misconduct “undermines any suggestion that this is the type of frivolous suit that the PSLRA was meant to screen out.”
They’ve disputed Nvidia’s arguments that they’ve relied solely on a hired expert to support their claims, arguing that each fact was corroborated by employee witnesses and third-party reports.
If Nvidia wins, investors warned, the SCOTUS decision would risk harming a wide range of private securities litigation that Congress has found “‘is an indispensable tool’ for ‘defrauded investors’ to ‘recover their losses without having to rely upon government action.'”
Given the flood of photorealistic AI-generated images washing over social media networks like X and Facebook these days, we’re seemingly entering a new age of media skepticism: the era of what I’m calling “deep doubt.” While questioning the authenticity of digital content stretches back decades—and analog media long before that—easy access to tools that generate convincing fake content has led to a new wave of liars using AI-generated scenes to deny real documentary evidence. Along the way, people’s existing skepticism toward online content from strangers may be reaching new heights.
Deep doubt is skepticism of real media that stems from the existence of generative AI. This manifests as broad public skepticism toward the veracity of media artifacts, which in turn leads to a notable consequence: People can now more credibly claim that real events did not happen and suggest that documentary evidence was fabricated using AI tools.
The concept behind “deep doubt” isn’t new, but its real-world impact is becoming increasingly apparent. Since the term “deepfake” first surfaced in 2017, we’ve seen a rapid evolution in AI-generated media capabilities. This has led to recent examples of deep doubt in action, such as conspiracy theorists claiming that President Joe Biden has been replaced by an AI-powered hologram and former President Donald Trump’s baseless accusation in August that Vice President Kamala Harris used AI to fake crowd sizes at her rallies. And on Friday, Trump cried “AI” again over a photo of him with E. Jean Carroll, a writer who successfully sued him for sexual assault; the photo contradicts his claim that he never met her.
Legal scholars Danielle K. Citron and Robert Chesney foresaw this trend years ago, coining the term “liar’s dividend” in 2019 to describe the consequence of deep doubt: deepfakes being weaponized by liars to discredit authentic evidence. But whereas deep doubt was once a hypothetical academic concept, it is now our reality.
The rise of deepfakes, the persistence of doubt
Doubt has been a political weapon since ancient times. This modern AI-fueled manifestation is just the latest evolution of a tactic where the seeds of uncertainty are sown to manipulate public opinion, undermine opponents, and hide the truth. AI is the newest refuge of liars.
Over the past decade, the rise of deep-learning technology has made it increasingly easy for people to craft false or modified pictures, audio, text, or video that appear to be non-synthesized organic media. Deepfakes were named after a Reddit user going by the name “deepfakes,” who shared AI-faked pornography on the service, swapping out the face of a performer with the face of someone else who wasn’t part of the original recording.
In the 20th century, one could argue that part of our trust in media produced by others came from how expensive and time-consuming it was, and how much skill it required, to produce documentary images and films. Even texts required a great deal of time and skill. As the deep doubt phenomenon grows, it will erode this 20th-century media sensibility. It will also affect our political discourse, legal systems, and even our shared understanding of historical events, all of which rely on that media to function; after all, we rely on others to get information about the world. From photorealistic images to pitch-perfect voice clones, our perception of what we consider “truth” in media will need recalibration.
In April, a panel of federal judges highlighted the potential for AI-generated deepfakes to not only introduce fake evidence but also cast doubt on genuine evidence in court trials. The concern emerged during a meeting of the US Judicial Conference’s Advisory Committee on Evidence Rules, where the judges discussed the challenges of authenticating digital evidence in an era of increasingly sophisticated AI technology. Ultimately, the judges decided to postpone making any AI-related rule changes, but their meeting shows that the subject is already being considered by American judges.
The Children’s Health Defense (CHD), an anti-vaccine group founded by Robert F. Kennedy Jr, has once again failed to convince a court that Meta acted as a state agent when censoring the group’s posts and ads on Facebook and Instagram.
In his opinion affirming a lower court’s dismissal, US Ninth Circuit Court of Appeals Judge Eric Miller wrote that CHD failed to prove that Meta acted as an arm of the government in censoring posts. Concluding that Meta’s right to censor views that the platforms find “distasteful” is protected by the First Amendment, Miller denied CHD’s requested relief, which had included an injunction and civil monetary damages.
“Meta evidently believes that vaccines are safe and effective and that their use should be encouraged,” Miller wrote. “It does not lose the right to promote those views simply because they happen to be shared by the government.”
CHD told Reuters that the group “was disappointed with the decision and considering its legal options.”
The group first filed the complaint in 2020, arguing that Meta colluded with government officials to censor protected speech by labeling anti-vaccine posts as misleading or removing and shadowbanning CHD posts. This caused CHD’s traffic on the platforms to plummet, CHD claimed, and ultimately, its pages were removed from both platforms.
However, critically, Miller wrote, CHD did not allege that “the government was actually involved in the decisions to label CHD’s posts as ‘false’ or ‘misleading,’ the decision to put the warning label on CHD’s Facebook page, or the decisions to ‘demonetize’ or ‘shadow-ban.'”
“CHD has not alleged facts that allow us to infer that the government coerced Meta into implementing a specific policy,” Miller wrote.
Instead, Meta “was entitled to encourage” various “input from the government,” justifiably seeking vaccine-related information provided by the World Health Organization (WHO) and the US Centers for Disease Control and Prevention (CDC) as it navigated complex content moderation decisions throughout the pandemic, Miller wrote.
Therefore, Meta’s actions against CHD were due to “Meta’s own ‘policy of censoring,’ not any provision of federal law,” Miller concluded. “The evidence suggested that Meta had independent incentives to moderate content and exercised its own judgment in so doing.”
None of CHD’s theories that Meta coordinated with officials to deprive “CHD of its constitutional rights” were plausible, Miller wrote, whereas the “innocent alternative”—”that Meta adopted the policy it did simply because” CEO Mark Zuckerberg and Meta “share the government’s view that vaccines are safe and effective”—appeared “more plausible.”
Meta “does not become an agent of the government just because it decides that the CDC sometimes has a point,” Miller wrote.
Equally not persuasive were CHD’s notions that Section 230 immunity—which shields platforms from liability for third-party content—”‘removed all legal barriers’ to the censorship of vaccine-related speech,” such that “Meta’s restriction of that content should be considered state action.”
“That Section 230 operates in the background to immunize Meta if it chooses to suppress vaccine misinformation—whether because it shares the government’s health concerns or for independent commercial reasons—does not transform Meta’s choice into state action,” Miller wrote.
One judge dissented over Section 230 concerns
In his dissenting opinion, Judge Daniel Collins defended CHD’s Section 230 claim, however, suggesting that the appeals court erred and should have granted CHD injunctive and declaratory relief from alleged censorship. CHD CEO Mary Holland told The Defender that the group was pleased the decision was not unanimous.
According to Collins, who like Miller is a Trump appointee, Meta could never have built its massive social platforms without Section 230 immunity, which grants platforms the ability to broadly censor viewpoints they disfavor.
It was “important to keep in mind” that “the vast practical power that Meta exercises over the speech of millions of others ultimately rests on a government-granted privilege to which Meta is not constitutionally entitled,” Collins wrote. And this power “makes a crucial difference in the state-action analysis.”
As Collins sees it, CHD could plausibly allege that Meta’s communications with government officials about vaccine-related misinformation targeted specific users, like the “disinformation dozen” that includes both CHD and Kennedy. In that case, Collins suggested, Section 230 could give the government an opportunity to target speech it disfavors through mechanisms provided by the platforms.
“Having specifically and purposefully created an immunized power for mega-platform operators to freely censor the speech of millions of persons on those platforms, the Government is perhaps unsurprisingly tempted to then try to influence particular uses of such dangerous levers against protected speech expressing viewpoints the Government does not like,” Collins warned.
He further argued that “Meta’s relevant First Amendment rights” do not “give Meta an unbounded freedom to work with the Government in suppressing speech on its platforms.” Disagreeing with the majority, he wrote that “in this distinctive scenario, applying the state-action doctrine promotes individual liberty by keeping the Government’s hands away from the tempting levers of censorship on these vast platforms.”
The majority agreed, however, that while Section 230 immunity “is undoubtedly a significant benefit to companies like Meta,” lawmakers’ threats to weaken Section 230 did not suggest that Meta’s anti-vaccine policy was coerced state action.
“Many companies rely, in one way or another, on a favorable regulatory environment or the goodwill of the government,” Miller wrote. “If that were enough for state action, every large government contractor would be a state actor. But that is not the law.”
Surprisingly, this step wasn’t taken under laws like the Digital Services Act (DSA), the Digital Markets Act (DMA), or the General Data Protection Regulation (GDPR).
Instead, the EC announced Monday that Meta risked sanctions under EU consumer laws if it could not resolve key concerns about Meta’s so-called “pay or consent” model.
Meta’s model is seemingly problematic, the commission said, because Meta “requested consumers overnight to either subscribe to use Facebook and Instagram against a fee or to consent to Meta’s use of their personal data to be shown personalized ads, allowing Meta to make revenue out of it.”
Because users were given such short notice, they may have been “exposed to undue pressure to choose rapidly between the two models, fearing that they would instantly lose access to their accounts and their network of contacts,” the EC said.
To protect consumers, the EC joined national consumer protection authorities, sending a letter to Meta requiring the tech giant to propose solutions to resolve the commission’s biggest concerns by September 1.
That Meta’s “pay or consent” model may be “misleading” is a top concern because it uses the term “free” for ad-based plans, even though Meta “can make revenue from using their personal data to show them personalized ads.” It seems that while Meta does not consider giving away personal information to be a cost to users, the EC’s commissioner for justice, Didier Reynders, apparently does.
“Consumers must not be lured into believing that they would either pay and not be shown any ads anymore, or receive a service for free, when, instead, they would agree that the company used their personal data to make revenue with ads,” Reynders said. “EU consumer protection law is clear in this respect. Traders must inform consumers upfront and in a fully transparent manner on how they use their personal data. This is a fundamental right that we will protect.”
Additionally, the EC is concerned that Meta users might be confused about how “to navigate through different screens in the Facebook/Instagram app or web-version and to click on hyperlinks directing them to different parts of the Terms of Service or Privacy Policy to find out how their preferences, personal data, and user-generated data will be used by Meta to show them personalized ads.” They may also find Meta’s “imprecise terms and language” confusing, such as Meta referring to “your info” instead of clearly referring to consumers’ “personal data.”
To resolve the EC’s concerns, Meta may have to give EU users more time to decide if they want to pay to subscribe or consent to personal data collection for targeted ads. Or Meta may have to take more drastic steps by altering language and screens used when securing consent to collect data or potentially even scrapping its “pay or consent” model entirely, as pressure in the EU mounts.
“Subscriptions as an alternative to advertising are a well-established business model across many industries,” Meta’s spokesperson told Ars. “Subscription for no ads follows the direction of the highest court in Europe and we are confident it complies with European regulation.”
Meta’s model is “sneaky,” EC said
Since last year, the social media company has argued that its “subscription for no ads” model was “endorsed” by the highest court in Europe, the Court of Justice of the European Union (CJEU).
However, privacy advocates have noted that this alleged endorsement came following a CJEU case under the GDPR and was only presented as a hypothetical, rather than a formal part of the ruling, as Meta seems to interpret.
What the CJEU said was that “users must be free to refuse individually”—”in the context of” signing up for services—”to give their consent to particular data processing operations not necessary” for Meta to provide such services “without being obliged to refrain entirely from using the service.” That “means that those users are to be offered, if necessary for an appropriate fee, an equivalent alternative not accompanied by such data processing operations,” the CJEU said.
The nuance here may matter when it comes to Meta’s proposed solutions even if the EC accepts the CJEU’s suggestion of an acceptable alternative as setting some sort of legal precedent. Because the consumer protection authorities raised the action due to Meta suddenly changing the consent model for existing users—not “in the context of” signing up for services—Meta may struggle to persuade the EC that existing users weren’t misled and pressured into paying for a subscription or consenting to ads, given how fast Meta’s policy shifted.
Meta risks sanctions if a compromise can’t be reached, the EC said. Under the EU’s Unfair Contract Terms Directive, for example, Meta could be fined up to 4 percent of its annual turnover if consumer protection authorities are unsatisfied with Meta’s proposed solutions.
The EC’s vice president for values and transparency, Věra Jourová, provided a statement in the press release, calling Meta’s abrupt introduction of the “pay or consent” model “sneaky.”
“We are proud of our strong consumer protection laws which empower Europeans to have the right to be accurately informed about changes such as the one proposed by Meta,” Jourová said. “In the EU, consumers are able to make truly informed choices and we now take action to safeguard this right.”
This week, Meta asked a US district court in California to toss a lawsuit filed by a professor, Ethan Zuckerman, who fears that Meta will sue him if he releases a tool that would give Facebook users an automated way to easily remove all content from their feeds.
Zuckerman has alleged that the imminent threat of a lawsuit from Meta has prevented him from releasing Unfollow Everything 2.0, suggesting that a cease-and-desist letter sent to the creator of the original Unfollow Everything substantiates his fears.
He’s hoping the court will find that either releasing his tool would not breach Facebook’s terms of use—which prevent “accessing or collecting data from Facebook ‘using automated means’”—or that those terms conflict with public policy. Among laws that Facebook’s terms allegedly conflict with are the First Amendment, Section 230 of the Communications Decency Act, the Computer Fraud and Abuse Act (CFAA), as well as California’s Computer Data Access and Fraud Act (CDAFA) and state privacy laws.
But Meta claimed in its motion to dismiss that Zuckerman’s suit is premature, mostly because the tool has not yet been built and Meta has not had a chance to review the “non-existent tool” to determine how Unfollow Everything 2.0 might impact its platform or its users.
“Besides bald assertions about how Plaintiff intends Unfollow Everything 2.0 to work and what he plans to do with it, there are no concrete facts that would enable this Court to adjudicate potential legal claims regarding this tool—which, at present, does not even operate in the real world,” Meta argued.
Meta wants all of Zuckerman’s claims to be dismissed, arguing that “adjudicating Plaintiff’s claims would require needless rulings on hypothetical applications of California law, would likely result in duplicative litigation, and would encourage forum shopping.”
At the heart of Meta’s defense is a claim that there’s no telling yet if Zuckerman will ever be able to release the tool, although Zuckerman said he was prepared to finish the build within six weeks of a court win. Last May, Zuckerman told Ars that it’s better to wait to finish building the tool while the lawsuit plays out because Facebook’s design is always changing.
Meta claimed that Zuckerman can’t confirm if Unfollow Everything 2.0 would work as described in his suit precisely because his findings are based on Facebook’s current interface, and the “process for unfollowing has changed over time and will likely continue to change.”
Further, Meta argued that the original Unfollow Everything performed in a different way—by logging in on behalf of users and automatically unfollowing everything, rather than performing the automated unfollowing when the users themselves log in. Because of that, Meta argued that the new tool may not prompt the same response from Meta.
A senior staff attorney at the Knight Institute who helped draft Zuckerman’s complaint, Ramya Krishnan, told Ars that the two tools operate nearly identically, however.
“Professor Zuckerman’s tool and the original Unfollow Everything work in essentially the same way,” Krishnan told Ars. “They automatically unfollow all of a user’s friends, groups, and pages after the user installs the tool and logs in to Facebook using their web browser.”
Ultimately, Meta claimed that there’s no telling if Meta would even sue over the tool’s automated access to user data, dismissing Zuckerman’s fears as unsubstantiated.
Only when the tool is out in the wild and Facebook is able to determine “actual, concrete facts about how it works in practice” that “may prove problematic” will Meta know if a legal response is needed, Meta claimed. Without reviewing the technical specs, Meta argued, Meta has no way to assess the damages or know if it would sue over a breach of contract, as alleged, or perhaps over other claims not alleged, such as trademark infringement.
On Wednesday, the Supreme Court tossed out claims that the Biden administration coerced social media platforms into censoring users by removing COVID-19 and election-related content.
Complaints alleging that high-ranking government officials were censoring conservatives had previously convinced a lower court to order an injunction limiting the Biden administration’s contacts with platforms. But now that injunction has been overturned, re-opening lines of communication just ahead of the 2024 elections—when officials will once again be closely monitoring the spread of misinformation online targeted at voters.
In a 6–3 vote, the majority ruled that none of the plaintiffs suing—including five social media users and Republican attorneys general in Louisiana and Missouri—had standing. They had alleged that the government had “pressured the platforms to censor their speech in violation of the First Amendment,” demanding an injunction to stop any future censorship.
Plaintiffs might have succeeded if they had instead sought damages for past harms. But in her opinion, Justice Amy Coney Barrett wrote that partly because the Biden administration seemingly stopped influencing platforms’ content policies in 2022, none of the plaintiffs could show evidence of a “substantial risk that, in the near future, they will suffer an injury that is traceable” to any government official. Thus, they did not seem to face “a real and immediate threat of repeated injury,” Barrett wrote.
“Without proof of an ongoing pressure campaign, it is entirely speculative that the platforms’ future moderation decisions will be attributable, even in part,” to government officials, Barrett wrote, finding that an injunction would do little to prevent future censorship.
Instead, plaintiffs’ claims “depend on the platforms’ actions,” Barrett emphasized, “yet the plaintiffs do not seek to enjoin the platforms from restricting any posts or accounts.”
“It is a bedrock principle that a federal court cannot redress ‘injury that results from the independent action of some third party not before the court,'” Barrett wrote.
Barrett repeatedly noted “weak” arguments raised by plaintiffs, none of which could directly link their specific content removals with the Biden administration’s pressure campaign urging platforms to remove vaccine or election misinformation.
According to Barrett, the lower court initially granting the injunction “glossed over complexities in the evidence,” including the fact that “platforms began to suppress the plaintiffs’ COVID-19 content” before the government pressure campaign began. That’s an issue, Barrett said, because standing to sue “requires a threshold showing that a particular defendant pressured a particular platform to censor a particular topic before that platform suppressed a particular plaintiff’s speech on that topic.”
“While the record reflects that the Government defendants played a role in at least some of the platforms’ moderation choices, the evidence indicates that the platforms had independent incentives to moderate content and often exercised their own judgment,” Barrett wrote.
Barrett was similarly unconvinced by arguments that plaintiffs risk platforms removing future content based on stricter moderation policies that were previously coerced by officials.
“Without evidence of continued pressure from the defendants, the platforms remain free to enforce, or not to enforce, their policies—even those tainted by initial governmental coercion,” Barrett wrote.
Alito: SCOTUS “shirks duty” to defend free speech
Justices Clarence Thomas and Neil Gorsuch joined Samuel Alito in dissenting, arguing that “this is one of the most important free speech cases to reach this Court in years” and that the Supreme Court had an “obligation” to “tackle the free speech issue that the case presents.”
“The Court, however, shirks that duty and thus permits the successful campaign of coercion in this case to stand as an attractive model for future officials who want to control what the people say, hear, and think,” Alito wrote.
Alito argued that the evidence showed that while “downright dangerous” speech was suppressed, so was “valuable speech.” He agreed with the lower court that “a far-reaching and widespread censorship campaign” had been “conducted by high-ranking federal officials against Americans who expressed certain disfavored views about COVID-19 on social media.”
“For months, high-ranking Government officials placed unrelenting pressure on Facebook to suppress Americans’ free speech,” Alito wrote. “Because the Court unjustifiably refuses to address this serious threat to the First Amendment, I respectfully dissent.”
At least one plaintiff who opposed masking and vaccines, Jill Hines, was “indisputably injured,” Alito wrote, arguing that evidence showed that she was censored more frequently after officials pressured Facebook into changing its policies.
“Top federal officials continuously and persistently hectored Facebook to crack down on what the officials saw as unhelpful social media posts, including not only posts that they thought were false or misleading but also stories that they did not claim to be literally false but nevertheless wanted obscured,” Alito wrote.
While Barrett and the majority found that platforms were more likely responsible for injury, Alito disagreed, writing that with the threat of antitrust probes or Section 230 amendments, Facebook acted like “a subservient entity determined to stay in the good graces of a powerful taskmaster.”
Alito wrote that the majority was “applying a new and heightened standard” by requiring plaintiffs to “untangle Government-caused censorship from censorship that Facebook might have undertaken anyway.” In his view, it was enough that Hines showed that “one predictable effect of the officials’ action was that Facebook would modify its censorship policies in a way that affected her.”
“When the White House pressured Facebook to amend some of the policies related to speech in which Hines engaged, those amendments necessarily impacted some of Facebook’s censorship decisions,” Alito wrote. “Nothing more is needed. What the Court seems to want are a series of ironclad links.”