Anticipating that 2025 will be an “intense year” requiring rapid innovation, Mark Zuckerberg reportedly announced that Meta would be cutting 5 percent of its workforce—targeting “lowest performers.”
Bloomberg reviewed the memo explaining the cuts, which was posted to Meta’s internal Workplace forum Tuesday. In it, Zuckerberg confirmed that Meta was shifting its strategy to “move out low performers faster” so that the company can hire new talent to fill those vacancies this year.
“I’ve decided to raise the bar on performance management,” Zuckerberg said. “We typically manage out people who aren’t meeting expectations over the course of a year, but now we’re going to do more extensive performance-based cuts during this cycle.”
Cuts will likely impact more than 3,600 employees, as Meta’s most recent headcount in September totaled about 72,000 employees. It may not be as straightforward as letting go of anyone with an unsatisfactory performance review, as Zuckerberg said that any employee not currently meeting expectations could be spared if Meta is “optimistic about their future performance,” The Wall Street Journal reported.
Any employees affected will be notified by February 10 and receive “generous severance,” Zuckerberg’s memo promised.
This is the biggest round of cuts at Meta since 2023, when Meta laid off 10,000 employees during what Zuckerberg dubbed the “year of efficiency.” Those layoffs followed a prior round where 11,000 lost their jobs and Zuckerberg realized that “leaner is better.” He told employees in 2023 that a “surprising result” from reducing the workforce was “that many things have gone faster.”
“A leaner org will execute its highest priorities faster,” Zuckerberg wrote in 2023. “People will be more productive, and their work will be more fun and fulfilling. We will become an even greater magnet for the most talented people. That’s why in our Year of Efficiency, we are focused on canceling projects that are duplicative or lower priority and making every organization as lean as possible.”
And perhaps in a nod to Meta’s recent changes, Mastodon also vowed to “invest deeply in trust and safety” and ensure “everyone, especially marginalized communities,” feels “safe” on the platform.
To become a more user-focused paradise of “resilient, governable, open and safe digital spaces,” Mastodon is going to need a lot more funding. The blog called for donations to help fund an annual operating budget of $5.1 million (5 million euros) in 2025. That’s a massive leap from the $152,476 (149,400 euros) total operating expenses Mastodon reported in 2023.
Other social networks wary of EU regulations
Mastodon has decided to continue basing its operations in Europe, while still maintaining a separate US-based nonprofit entity as a “fundraising hub,” the blog said.
It will take time, Mastodon said, to “select the appropriate jurisdiction and structure in Europe” before Mastodon can then “determine which other (subsidiary) legal structures are needed to support operations and sustainability.”
While Mastodon is carefully getting re-settled as a nonprofit in Europe, Zuckerberg this week went on Joe Rogan’s podcast to call on Donald Trump to help US tech companies fight European Union fines, Politico reported.
Experts told France24 that EU officials may “perhaps wrongly” already be fearful about ruffling Trump’s feathers by targeting his tech allies and would likely need to use the “full legal arsenal” of EU digital laws to “stand up to Big Tech” once Trump’s next term starts.
As Big Tech prepares to continue battling EU regulators, Mastodon appears to be taking a different route, laying roots in Europe and “establishing the appropriate governance and leadership frameworks that reflect the nature and purpose of Mastodon as a whole” and “responsibly serve the community,” its blog said.
“Our core mission remains the same: to create the tools and digital spaces where people can build authentic, constructive online communities free from ads, data exploitation, manipulative algorithms, or corporate monopolies,” Mastodon’s blog said.
Meta has reportedly ended diversity, equity, and inclusion (DEI) programs that influenced staff hiring and training, as well as vendor decisions, effective immediately.
According to an internal memo viewed by Axios and verified by Ars, Meta’s vice president of human resources, Janelle Gale, told Meta employees that the shift came because the “legal and policy landscape surrounding diversity, equity, and inclusion efforts in the United States is changing.”
It’s another move by Meta that some view as part of the company’s larger effort to align with the incoming Trump administration’s politics. In December, Donald Trump promised to crack down on DEI initiatives at companies and on college campuses, The Guardian reported.
Earlier this week, Meta cut its fact-checking program, which was introduced in 2016 after Trump’s first election to prevent misinformation from spreading. In a statement announcing Meta’s pivot to X’s Community Notes-like approach to fact-checking, Meta CEO Mark Zuckerberg claimed that fact-checkers were “too politically biased” and “destroyed trust” on Meta platforms like Facebook, Instagram, and Threads.
Trump has also long promised to renew his war on alleged social media censorship while in office. Meta faced backlash this week over leaked rule changes relaxing its hate speech policies, The Intercept reported; Zuckerberg said the prior restrictions were “out of touch with mainstream discourse.” Those changes included allowing anti-trans slurs previously banned, as well as permitting women to be called “property” and gay people to be called “mentally ill,” Mashable reported. In a statement, GLAAD said that rolling back safety guardrails risked turning Meta platforms into “unsafe landscapes filled with dangerous hate speech, violence, harassment, and misinformation” and alleged that Meta appeared to be willing to “normalize anti-LGBTQ hatred for profit.”
Zuckerberg says Meta will “work with President Trump” to fight censorship.
Meta announced today that it’s ending the third-party fact-checking program it introduced in 2016, and will rely instead on a Community Notes approach similar to what’s used on Elon Musk’s X platform.
The end of third-party fact-checking and related changes to Meta policies could help the company make friends in the Trump administration and in governments of conservative-leaning states that have tried to impose legal limits on content moderation. The operator of Facebook and Instagram announced the changes in a blog post and a video message recorded by CEO Mark Zuckerberg.
“Governments and legacy media have pushed to censor more and more. A lot of this is clearly political,” Zuckerberg said. He said the recent elections “feel like a cultural tipping point toward once again prioritizing speech.”
“We’re going to get rid of fact-checkers and replace them with Community Notes, similar to X, starting in the US,” Zuckerberg said. “After Trump first got elected in 2016, the legacy media wrote nonstop about how misinformation was a threat to democracy. We tried in good faith to address those concerns without becoming the arbiters of truth. But the fact-checkers have just been too politically biased and have destroyed more trust than they’ve created, especially in the US.”
Meta says the soon-to-be-discontinued fact-checking program includes over 90 third-party organizations that evaluate posts in over 60 languages. The US-based fact-checkers are AFP USA, Check Your Fact, Factcheck.org, Lead Stories, PolitiFact, Science Feedback, Reuters Fact Check, TelevisaUnivision, The Dispatch, and USA Today.
The independent fact-checkers rate the accuracy of posts and apply ratings such as False, Altered, Partly False, Missing Context, Satire, and True. Meta adds notices to posts rated as false or misleading and notifies users before they try to share the content or if they shared it in the past.
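For illustration only, here is a minimal Python sketch of how a rating-and-notice flow like the one described might be modeled. The rating names come from Meta’s published list above, but the function, the set of “misleading” ratings, and the notification strings are hypothetical assumptions, not Meta’s actual system.

from enum import Enum

class Rating(Enum):
    FALSE = "False"
    ALTERED = "Altered"
    PARTLY_FALSE = "Partly False"
    MISSING_CONTEXT = "Missing Context"
    SATIRE = "Satire"
    TRUE = "True"

# Assumption: these ratings trigger a warning notice, per the behavior described above.
MISLEADING = {Rating.FALSE, Rating.ALTERED, Rating.PARTLY_FALSE}

def apply_fact_check(post_id: str, rating: Rating, past_sharers: list[str]) -> list[str]:
    """Hypothetical helper mirroring the described flow: label misleading
    posts, warn users before sharing, and notify users who shared earlier."""
    actions = []
    if rating in MISLEADING:
        actions.append(f"attach '{rating.value}' notice to post {post_id}")
        actions.append(f"warn users before they share post {post_id}")
        actions.extend(f"notify past sharer {user}" for user in past_sharers)
    return actions

print(apply_fact_check("p123", Rating.PARTLY_FALSE, ["alice", "bob"]))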
Meta: Experts “have their own biases”
In the blog post that accompanied Zuckerberg’s video message, Chief Global Affairs Officer Joel Kaplan said the 2016 decision to use independent fact-checkers seemed like “the best and most reasonable choice at the time… The intention of the program was to have these independent experts give people more information about the things they see online, particularly viral hoaxes, so they were able to judge for themselves what they saw and read.”
But experts “have their own biases and perspectives,” and the program imposed “intrusive labels and reduced distribution” of content “that people would understand to be legitimate political speech and debate,” Kaplan wrote.
The X-style Community Notes system lets the community “decide when posts are potentially misleading and need more context, and people across a diverse range of perspectives decide what sort of context is helpful for other users to see… Just like they do on X, Community Notes [on Meta sites] will require agreement between people with a range of perspectives to help prevent biased ratings,” Kaplan wrote.
The end of third-party fact-checking will be implemented in the US before other countries. Meta will also move its internal trust and safety and content moderation teams out of California, Zuckerberg said. “Our US-based content review is going to be based in Texas. As we work to promote free expression, I think it will help us build trust to do this work in places where there is less concern about the bias of our teams,” he said. Meta will continue to take “legitimately bad stuff” like drugs, terrorism, and child exploitation “very seriously,” Zuckerberg said.
Zuckerberg pledges to work with Trump
Meta will “phase in a more comprehensive community notes system” over the next couple of months, Zuckerberg said. Meta, which donated $1 million to Trump’s inaugural fund, will also “work with President Trump to push back on governments around the world that are going after American companies and pushing to censor more,” Zuckerberg said.
Zuckerberg said that “Europe has an ever-increasing number of laws institutionalizing censorship,” that “Latin American countries have secret courts that can quietly order companies to take things down,” and that “China has censored apps from even working in the country.” Meta needs “the support of the US government” to push back against other countries’ content-restriction orders, he said.
“That’s why it’s been so difficult over the past four years when even the US government has pushed for censorship,” Zuckerberg said, referring to the Biden administration. “By going after us and other American companies, it has emboldened other governments to go even further. But now we have the opportunity to restore free expression, and I am excited to take it.”
Brendan Carr, Trump’s pick to lead the Federal Communications Commission, praised Meta’s policy changes. Carr has promised to shift the FCC’s focus from regulating telecom companies to cracking down on Big Tech and media companies that he alleges are part of a “censorship cartel.”
“President Trump’s resolute and strong support for the free speech rights of everyday Americans is already paying dividends,” Carr wrote on X today. “Facebook’s announcements is [sic] a good step in the right direction. I look forward to monitoring these developments and their implementation. The work continues until the censorship cartel is completely dismantled and destroyed.”
Group: Meta is “saying the truth doesn’t matter”
Meta’s changes were criticized by Public Citizen, a nonprofit advocacy group founded by Ralph Nader. “Asking users to fact-check themselves is tantamount to Meta saying the truth doesn’t matter,” Public Citizen co-president Lisa Gilbert said. “Misinformation will flow more freely with this policy change, as we cannot assume that corrections will be made when false information proliferates. The American people deserve accurate information about our elections, health risks, the environment, and much more.”
Media advocacy group Free Press said that “Zuckerberg is one of many billionaires who are cozying up to dangerous demagogues like Trump and pushing initiatives that favor their bottom lines at the expense of everything and everyone else.” Meta appears to be abandoning its “responsibility to protect its many users, and align[ing] the company more closely with an incoming president who’s a known enemy of accountability,” Free Press Senior Counsel Nora Benavidez said.
X’s Community Notes system was criticized in a recent report by the Center for Countering Digital Hate (CCDH), which said it “found that 74 percent of accurate community notes on US election misinformation never get shown to users.” (X previously sued the CCDH, but the lawsuit was dismissed by a federal judge.)
Previewing other changes, Zuckerberg said that Meta will eliminate content restrictions “that are just out of touch with mainstream discourse” and change how it enforces policies “to reduce the mistakes that account for the vast majority of censorship on our platforms.”
“We used to have filters that scanned for any policy violation. Now, we’re going to focus those filters on tackling illegal and high-severity violations, and for lower severity violations, we’re going to rely on someone reporting an issue before we take action,” he said. “The problem is the filters make mistakes, and they take down a lot of content that they shouldn’t. So by dialing them back, we’re going to dramatically reduce the amount of censorship on our platforms.”
Meta to relax filters, recommend more political content
Zuckerberg said Meta will re-tune content filters “to require much higher confidence before taking down content.” He said this means Meta will “catch less bad stuff” but will “also reduce the number of innocent people’s posts and accounts that we accidentally take down.”
Meta has “built a lot of complex systems to moderate content,” he noted. Even if these systems “accidentally censor just 1 percent of posts, that’s millions of people, and we’ve reached a point where it’s just too many mistakes and too much censorship,” he said.
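Mechanically, what Zuckerberg describes amounts to raising a classifier’s action threshold and restricting proactive enforcement to high-severity categories. Below is a minimal sketch of that decision logic in Python; the category names, threshold values, and function are illustrative assumptions, since Meta has not published its systems.

from dataclasses import dataclass

# Hypothetical severity tiers and thresholds; real values are not public.
HIGH_SEVERITY = {"terrorism", "child_exploitation", "drug_sales"}
AUTO_REMOVE_THRESHOLD = 0.95  # "much higher confidence before taking down content"

@dataclass
class Flag:
    category: str        # policy area the classifier matched
    confidence: float    # classifier score in [0, 1]
    user_reported: bool  # whether a person reported the post

def moderation_action(flag: Flag) -> str:
    """Sketch of a two-tier policy: proactive filtering only for
    high-severity categories, report-driven review for everything else."""
    if flag.category in HIGH_SEVERITY:
        if flag.confidence >= AUTO_REMOVE_THRESHOLD:
            return "remove"
        return "send to human review"
    # Lower-severity categories: act only when a user reports the post.
    if flag.user_reported:
        return "send to human review"
    return "leave up"

print(moderation_action(Flag("terrorism", 0.97, False)))    # remove
print(moderation_action(Flag("hate_speech", 0.90, False)))  # leave up
print(moderation_action(Flag("hate_speech", 0.90, True)))   # send to human review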
Kaplan wrote that Meta has censored too much harmless content and that “too many people find themselves wrongly locked up in ‘Facebook jail.'”
“In recent years we’ve developed increasingly complex systems to manage content across our platforms, partly in response to societal and political pressure to moderate content,” Kaplan wrote. “This approach has gone too far. As well-intentioned as many of these efforts have been, they have expanded over time to the point where we are making too many mistakes, frustrating our users and too often getting in the way of the free expression we set out to enable.”
Another upcoming change is that Meta will recommend more political posts. “For a while, the community asked to see less politics because it was making people stressed, so we stopped recommending these posts,” Zuckerberg said. “But it feels like we’re in a new era now, and we’re starting to get feedback that people want to see this content again, so we’re going to start phasing this back into Facebook, Instagram, and Threads while working to keep the communities friendly and positive.”
The Children’s Health Defense (CHD), an anti-vaccine group founded by Robert F. Kennedy Jr., has once again failed to convince a court that Meta acted as a state agent when censoring the group’s posts and ads on Facebook and Instagram.
In his opinion affirming a lower court’s dismissal, US Ninth Circuit Court of Appeals Judge Eric Miller wrote that CHD failed to prove that Meta acted as an arm of the government in censoring posts. Concluding that Meta’s right to censor views that the platforms find “distasteful” is protected by the First Amendment, Miller denied CHD’s requested relief, which had included an injunction and civil monetary damages.
“Meta evidently believes that vaccines are safe and effective and that their use should be encouraged,” Miller wrote. “It does not lose the right to promote those views simply because they happen to be shared by the government.”
CHD told Reuters that the group “was disappointed with the decision and considering its legal options.”
The group first filed the complaint in 2020, arguing that Meta colluded with government officials to censor protected speech by labeling anti-vaccine posts as misleading or removing and shadowbanning CHD posts. This caused CHD’s traffic on the platforms to plummet, CHD claimed, and ultimately, its pages were removed from both platforms.
However, critically, Miller wrote, CHD did not allege that “the government was actually involved in the decisions to label CHD’s posts as ‘false’ or ‘misleading,’ the decision to put the warning label on CHD’s Facebook page, or the decisions to ‘demonetize’ or ‘shadow-ban.'”
“CHD has not alleged facts that allow us to infer that the government coerced Meta into implementing a specific policy,” Miller wrote.
Instead, Meta “was entitled to encourage” various “input from the government,” justifiably seeking vaccine-related information provided by the World Health Organization (WHO) and the US Centers for Disease Control and Prevention (CDC) as it navigated complex content moderation decisions throughout the pandemic, Miller wrote.
Therefore, Meta’s actions against CHD were due to “Meta’s own ‘policy of censoring,’ not any provision of federal law,” Miller concluded. “The evidence suggested that Meta had independent incentives to moderate content and exercised its own judgment in so doing.”
None of CHD’s theories that Meta coordinated with officials to deprive “CHD of its constitutional rights” were plausible, Miller wrote, whereas the “innocent alternative”—”that Meta adopted the policy it did simply because” CEO Mark Zuckerberg and Meta “share the government’s view that vaccines are safe and effective”—appeared “more plausible.”
Meta “does not become an agent of the government just because it decides that the CDC sometimes has a point,” Miller wrote.
Equally unpersuasive were CHD’s theories that Section 230 immunity, which shields platforms from liability for third-party content, “‘removed all legal barriers’ to the censorship of vaccine-related speech,” such that “Meta’s restriction of that content should be considered state action.”
“That Section 230 operates in the background to immunize Meta if it chooses to suppress vaccine misinformation—whether because it shares the government’s health concerns or for independent commercial reasons—does not transform Meta’s choice into state action,” Miller wrote.
One judge dissented over Section 230 concerns
In his dissenting opinion, however, Judge Daniel Collins defended CHD’s Section 230 claim, suggesting that the appeals court erred and should have granted CHD injunctive and declaratory relief from the alleged censorship. CHD CEO Mary Holland told The Defender that the group was pleased the decision was not unanimous.
According to Collins, who like Miller is a Trump appointee, Meta could never have built its massive social platforms without Section 230 immunity, which grants platforms the ability to broadly censor viewpoints they disfavor.
It was “important to keep in mind” that “the vast practical power that Meta exercises over the speech of millions of others ultimately rests on a government-granted privilege to which Meta is not constitutionally entitled,” Collins wrote. And this power “makes a crucial difference in the state-action analysis.”
As Collins sees it, CHD could plausibly allege that Meta’s communications with government officials about vaccine-related misinformation targeted specific users, like the “disinformation dozen” that includes both CHD and Kennedy. In that case, Collins argued, Section 230 could give the government an opening to target speech it disfavors through the moderation levers the platforms provide.
“Having specifically and purposefully created an immunized power for mega-platform operators to freely censor the speech of millions of persons on those platforms, the Government is perhaps unsurprisingly tempted to then try to influence particular uses of such dangerous levers against protected speech expressing viewpoints the Government does not like,” Collins warned.
He further argued that “Meta’s relevant First Amendment rights” do not “give Meta an unbounded freedom to work with the Government in suppressing speech on its platforms.” Disagreeing with the majority, he wrote that “in this distinctive scenario, applying the state-action doctrine promotes individual liberty by keeping the Government’s hands away from the tempting levers of censorship on these vast platforms.”
The majority agreed, however, that while Section 230 immunity “is undoubtedly a significant benefit to companies like Meta,” lawmakers’ threats to weaken Section 230 did not suggest that Meta’s anti-vaccine policy was coerced state action.
“Many companies rely, in one way or another, on a favorable regulatory environment or the goodwill of the government,” Miller wrote. “If that were enough for state action, every large government contractor would be a state actor. But that is not the law.”
Surprisingly, the European Commission’s latest action against Meta wasn’t taken under laws like the Digital Services Act (DSA), the Digital Markets Act (DMA), or the General Data Protection Regulation (GDPR).
Instead, the EC announced Monday that Meta risked sanctions under EU consumer laws if it could not resolve key concerns about its so-called “pay or consent” model.
Meta’s model is seemingly problematic, the commission said, because Meta “requested consumers overnight to either subscribe to use Facebook and Instagram against a fee or to consent to Meta’s use of their personal data to be shown personalized ads, allowing Meta to make revenue out of it.”
Because users were given such short notice, they may have been “exposed to undue pressure to choose rapidly between the two models, fearing that they would instantly lose access to their accounts and their network of contacts,” the EC said.
To protect consumers, the EC joined national consumer protection authorities, sending a letter to Meta requiring the tech giant to propose solutions to resolve the commission’s biggest concerns by September 1.
That Meta’s “pay or consent” model may be “misleading” is a top concern because it uses the term “free” for ad-based plans, even though Meta “can make revenue from using their personal data to show them personalized ads.” It seems that while Meta does not consider giving away personal information to be a cost to users, the EC’s commissioner for justice, Didier Reynders, apparently does.
“Consumers must not be lured into believing that they would either pay and not be shown any ads anymore, or receive a service for free, when, instead, they would agree that the company used their personal data to make revenue with ads,” Reynders said. “EU consumer protection law is clear in this respect. Traders must inform consumers upfront and in a fully transparent manner on how they use their personal data. This is a fundamental right that we will protect.”
Additionally, the EC is concerned that Meta users might be confused about how “to navigate through different screens in the Facebook/Instagram app or web-version and to click on hyperlinks directing them to different parts of the Terms of Service or Privacy Policy to find out how their preferences, personal data, and user-generated data will be used by Meta to show them personalized ads.” They may also find Meta’s “imprecise terms and language” confusing, such as Meta referring to “your info” instead of clearly referring to consumers’ “personal data.”
To resolve the EC’s concerns, Meta may have to give EU users more time to decide if they want to pay to subscribe or consent to personal data collection for targeted ads. Or Meta may have to take more drastic steps by altering language and screens used when securing consent to collect data or potentially even scrapping its “pay or consent” model entirely, as pressure in the EU mounts.
“Subscriptions as an alternative to advertising are a well-established business model across many industries,” Meta’s spokesperson told Ars. “Subscription for no ads follows the direction of the highest court in Europe and we are confident it complies with European regulation.”
Meta’s model is “sneaky,” EC said
Since last year, the social media company has argued that its “subscription for no ads” model was “endorsed” by the highest court in Europe, the Court of Justice of the European Union (CJEU).
However, privacy advocates have noted that this alleged endorsement came following a CJEU case under the GDPR and was only presented as a hypothetical, rather than a formal part of the ruling, as Meta seems to interpret.
What the CJEU said was that “users must be free to refuse individually”—”in the context of” signing up for services—”to give their consent to particular data processing operations not necessary” for Meta to provide such services “without being obliged to refrain entirely from using the service.” That “means that those users are to be offered, if necessary for an appropriate fee, an equivalent alternative not accompanied by such data processing operations,” the CJEU said.
The nuance here may matter when it comes to Meta’s proposed solutions, even if the EC accepts the CJEU’s hypothetical alternative as setting some sort of legal precedent. Because the consumer protection authorities took action over Meta suddenly changing the consent model for existing users (not “in the context of” signing up for services), Meta may struggle to persuade the EC that existing users weren’t misled and pressured into paying for a subscription or consenting to ads, given how quickly Meta’s policy shifted.
Meta risks sanctions if a compromise can’t be reached, the EC said. Under the EU’s Unfair Contract Terms Directive, for example, Meta could be fined up to 4 percent of its annual turnover if consumer protection authorities are unsatisfied with Meta’s proposed solutions.
The EC’s vice president for values and transparency, Věra Jourová, provided a statement in the press release, calling Meta’s abrupt introduction of the “pay or consent” model “sneaky.”
“We are proud of our strong consumer protection laws which empower Europeans to have the right to be accurately informed about changes such as the one proposed by Meta,” Jourová said. “In the EU, consumers are able to make truly informed choices and we now take action to safeguard this right.”
Meta has apparently paused plans to process mounds of user data to bring new AI experiences to Europe.
The decision comes after data regulators rebuffed the tech giant’s claims that it had “legitimate interests” in processing European Union- and European Economic Area (EEA)-based Facebook and Instagram users’ data—including personal posts and pictures—to train future AI tools.
There’s not much information available yet on Meta’s decision. But Meta’s EU regulator, the Irish Data Protection Commission (DPC), posted a statement confirming that Meta made the move after ongoing discussions with the DPC about compliance with the EU’s strict data privacy laws, including the General Data Protection Regulation (GDPR).
“The DPC welcomes the decision by Meta to pause its plans to train its large language model using public content shared by adults on Facebook and Instagram across the EU/EEA,” the DPC said. “This decision followed intensive engagement between the DPC and Meta. The DPC, in co-operation with its fellow EU data protection authorities, will continue to engage with Meta on this issue.”
The European Center for Digital Rights, known as Noyb, had filed 11 complaints across the EU and intended to file more to stop Meta from moving forward with its AI plans. The DPC initially gave Meta AI the green light to proceed but has now made a U-turn, Noyb said.
Meta’s privacy policy still requires an update
In a blog, Meta had previously teased new AI features coming to the EU, including everything from customized stickers for chats and stories to Meta AI, a “virtual assistant you can access to answer questions, generate images, and more.” Meta had argued that training on EU users’ personal data was necessary so that AI services could reflect “the diverse cultures and languages of the European communities who will use them.”
Before the pause, the company had been hoping to rely “on the legal basis of ‘legitimate interests’” to process the data, because it’s needed “to improve AI at Meta.” But Noyb and EU data regulators had argued that Meta’s legal basis did not comply with the GDPR, with the Norwegian Data Protection Authority arguing that “the most natural thing would have been to ask the users for their consent before their posts and images are used in this way.”
Rather than ask for consent, however, Meta had given EU users until June 26 to opt out. Noyb had alleged that in going this route, Meta planned to use “dark patterns” to thwart AI opt-outs in the EU and collect as much data as possible to fuel undisclosed AI technologies. Noyb urgently argued that once users’ data is in the system, “users seem to have no option of ever having it removed.”
Noyb said that the “obvious explanation” for Meta seemingly halting its plans was pushback from EU officials, but the privacy advocacy group also warned EU users that Meta’s privacy policy has not yet been fully updated to reflect the pause.
“We welcome this development but will monitor this closely,” Max Schrems, Noyb chair, said in a statement provided to Ars. “So far there is no official change of the Meta privacy policy, which would make this commitment legally binding. The cases we filed are ongoing and will need a determination.”
Ars was not immediately able to reach Meta for comment.
The US Department of Justice has started cracking down on the use of AI image generators to produce child sexual abuse materials (CSAM).
On Monday, the DOJ arrested Steven Anderegg, a 42-year-old “extremely technologically savvy” Wisconsin man who allegedly used Stable Diffusion to create “thousands of realistic images of prepubescent minors,” which were then distributed on Instagram and Telegram.
The cops were tipped off to Anderegg’s alleged activities after Instagram flagged direct messages that were sent on Anderegg’s Instagram account to a 15-year-old boy. Instagram reported the messages to the National Center for Missing and Exploited Children (NCMEC), which subsequently alerted law enforcement.
In the Instagram exchange, the DOJ found, Anderegg sent sexually explicit AI images of minors soon after the teen made his age known; the DOJ alleged that “the only reasonable explanation for sending these images was to sexually entice the child.”
According to the DOJ’s indictment, Anderegg is a software engineer with “professional experience working with AI.” Because of his “special skill” in generative AI (GenAI), he was allegedly able to generate the CSAM using a version of Stable Diffusion, “along with a graphical user interface and special add-ons created by other Stable Diffusion users that specialized in producing genitalia.”
After Instagram reported Anderegg’s messages to the minor, cops seized Anderegg’s laptop and found “over 13,000 GenAI images, with hundreds—if not thousands—of these images depicting nude or semi-clothed prepubescent minors lasciviously displaying or touching their genitals” or “engaging in sexual intercourse with men.”
In his messages to the teen, Anderegg seemingly “boasted” about his skill in generating CSAM, the indictment said. The DOJ alleged that evidence from his laptop showed that Anderegg “used extremely specific and explicit prompts to create these images,” including “specific ‘negative’ prompts—that is, prompts that direct the GenAI model on what not to include in generated content—to avoid creating images that depict adults.” These go-to prompts were stored on his computer, the DOJ alleged.
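For context on the mechanism, negative prompts are a standard feature of open-source Stable Diffusion tooling rather than anything unique to this case. A minimal, benign sketch using the Hugging Face diffusers library shows how the parameter steers generation away from listed concepts; the model ID and prompts here are illustrative.

# Benign illustration of how negative prompts steer Stable Diffusion.
# Requires: pip install diffusers transformers torch
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # illustrative model ID
    torch_dtype=torch.float16,
)
pipe = pipe.to("cuda")

image = pipe(
    prompt="a watercolor painting of a mountain lake at sunrise",
    # Concepts the model is steered *away* from during denoising:
    negative_prompt="blurry, low quality, watermark, text",
).images[0]
image.save("lake.png")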
Anderegg is currently in federal custody and has been charged with production, distribution, and possession of AI-generated CSAM, as well as “transferring obscene material to a minor under the age of 16,” the indictment said.
Because the DOJ suspected that Anderegg intended to use the AI-generated CSAM to groom a minor, the DOJ is arguing that there are “no conditions of release” that could prevent him from posing a “significant danger” to his community while the court mulls his case. The DOJ warned the court that it’s highly likely that any future contact with minors could go unnoticed, as Anderegg is seemingly tech-savvy enough to hide any future attempts to send minors AI-generated CSAM.
“He studied computer science and has decades of experience in software engineering,” the indictment said. “While computer monitoring may address the danger posed by less sophisticated offenders, the defendant’s background provides ample reason to conclude that he could sidestep such restrictions if he decided to. And if he did, any reoffending conduct would likely go undetected.”
If convicted of all four counts, he could face “a total statutory maximum penalty of 70 years in prison and a mandatory minimum of five years in prison,” the DOJ said. Partly because of Anderegg’s “special skill in GenAI,” the DOJ, which described its evidence against him as “strong,” suggested that it may recommend a sentencing range “as high as life imprisonment.”
Announcing Anderegg’s arrest, Deputy Attorney General Lisa Monaco made it clear that creating AI-generated CSAM is illegal in the US.
“Technology may change, but our commitment to protecting children will not,” Monaco said. “The Justice Department will aggressively pursue those who produce and distribute child sexual abuse material—or CSAM—no matter how that material was created. Put simply, CSAM generated by AI is still CSAM, and we will hold accountable those who exploit AI to create obscene, abusive, and increasingly photorealistic images of children.”
In a lawsuit that seems determined to ignore that Section 230 exists, Robert F. Kennedy Jr. has sued Meta for allegedly shadowbanning his million-dollar documentary, Who Is Bobby Kennedy?, and preventing his supporters from advocating for his presidential campaign.
According to Kennedy, Meta is colluding with the Biden administration to sway the 2024 presidential election by suppressing Kennedy’s documentary and making it harder to support Kennedy’s candidacy. This allegedly has caused “substantial donation losses,” while also violating the free speech rights of Kennedy, his supporters, and his film’s production company, AV24.
Meta had initially restricted the documentary on Facebook and Instagram but later fixed the issue after discovering that the film was mistakenly flagged by the platforms’ automated spam filters.
But Kennedy’s complaint claimed that Meta is still “brazenly censoring speech” by “continuing to throttle, de-boost, demote, and shadowban the film.” In an exhibit, Kennedy’s lawyers attached screenshots representing “hundreds” of Facebook and Instagram users whom Meta allegedly sent threats, intimidated, and sanctioned after they shared the documentary.
Some of these users remain suspended on Meta platforms, the complaint alleged. Others whose temporary suspensions have been lifted claimed that their posts are still being throttled, though, and Kennedy’s lawyers earnestly insisted that an exchange with Meta’s chatbot proves it.
Two days after the documentary’s release, Kennedy’s team apparently asked the Meta AI assistant, “When users post the link whoisbobbykennedy.com, can their followers see the post in their feeds?”
“I can tell you that the link is currently restricted by Meta,” the chatbot answered.
Chatbots, of course, are notoriously inaccurate sources of information, and Meta AI’s terms of service note this. In a section labeled “accuracy,” Meta warns that chatbot responses “may not reflect accurate, complete, or current information” and should always be verified.
Perhaps more significantly, there is little reason to think that Meta’s chatbot would have access to information about internal content moderation decisions.
Techdirt’s Mike Masnick mocked Kennedy’s reliance on the chatbot in the case. He noted that Kennedy seemed to have no evidence of the alleged shadow-banning, while there’s plenty of evidence that Meta’s spam filters accidentally remove non-violative content all the time.
Meta’s chatbot is “just a probabilistic stochastic parrot, repeating a probable sounding answer to users’ questions,” Masnick wrote. “And these idiots think it’s meaningful evidence. This is beyond embarrassing.”
Neither Meta nor Kennedy’s lawyer, Jed Rubenfeld, responded to Ars’ request to comment.
Brussels has opened an in-depth probe into Meta over concerns it is failing to do enough to protect children from becoming addicted to social media platforms such as Instagram.
The European Commission, the EU’s executive arm, announced on Thursday it would look into whether the Silicon Valley giant’s apps were reinforcing “rabbit hole” effects, where users get drawn ever deeper into online feeds and topics.
EU investigators will also look into whether Meta, which owns Facebook and Instagram, is complying with legal obligations to provide appropriate age-verification tools to prevent children from accessing inappropriate content.
The probe is the second into the company under the EU’s Digital Services Act. The landmark legislation is designed to police content online, with sweeping new rules on the protection of minors.
It also has mechanisms to force Internet platforms to reveal how they are tackling misinformation and propaganda.
The DSA, which was approved last year, imposes new obligations on very large online platforms with more than 45 million users in the EU. If Meta is found to have broken the law, Brussels can impose fines of up to 6 percent of a company’s global annual turnover.
Repeat offenders can even face bans in the single market as an extreme measure to enforce the rules.
Thierry Breton, commissioner for internal market, said the EU was “not convinced” that Meta “has done enough to comply with the DSA obligations to mitigate the risks of negative effects to the physical and mental health of young Europeans on its platforms Facebook and Instagram.”
“We are sparing no effort to protect our children,” Breton added.
Meta said: “We want young people to have safe, age-appropriate experiences online and have spent a decade developing more than 50 tools and policies designed to protect them. This is a challenge the whole industry is facing, and we look forward to sharing details of our work with the European Commission.”
In the investigation, the commission said it would focus on whether Meta’s platforms were putting in place “appropriate and proportionate measures to ensure a high level of privacy, safety, and security for minors.” It added that it was placing special emphasis on default privacy settings for children.
Brussels is especially concerned about whether the social media company’s platforms are properly moderating content from Russian sources that may try to destabilize upcoming elections across Europe.
Meta defended its moderating practices and said it had appropriate systems in place to stop the spread of disinformation on its platforms.
Meta will open up the operating system that runs on its Quest mixed reality headsets to other technology companies, it announced today.
What was previously simply called Quest software will be called Horizon OS, and the goal will be to move beyond the general-use Quest devices to more purpose-specific devices, according to an Instagram video from Meta CEO Mark Zuckerberg.
There will be headsets focused purely on watching TV and movies on virtual screens, with the emphasis on high-end OLED displays. There will also be headsets that are designed to be as light as possible at the expense of performance for productivity and exercise uses. And there will be gaming-oriented ones.
The announcement named three partners to start. Asus will produce a gaming headset under its Republic of Gamers (ROG) brand, Lenovo will make general-purpose headsets with an emphasis on “productivity, learning, and entertainment,” and Xbox and Meta will team up to deliver a special edition of the Meta Quest that will come bundled with an Xbox controller, Xbox Cloud Gaming, and Game Pass.
Users running Horizon OS devices from different manufacturers will be able to stay connected in the operating system’s social layer of “identities, avatars, social graphs, and friend groups” and will be able to enjoy shared virtual spaces together across devices.
The announcement comes after Meta became an early leader in the relatively small but interesting consumer mixed reality space, though it has seen diminishing returns on new devices as the market saturates.
Further, Apple recently entered the fray with its Vision Pro headset. The Vision Pro is not really a direct competitor to Meta’s Quest devices today—it’s far more expensive and loaded with higher-end tech—but it may only be the opening volley in a long competition between the companies.
Meta’s decision to open Horizon OS to partner OEMs, in contrast with Apple’s usual insistence on owning and integrating as much of its devices’ software, hardware, and services as it can, mirrors the smartphone market. There, Google’s Android (on which Horizon OS is based) runs on a variety of devices from a wide range of companies, while Apple’s iOS runs only on Apple’s own iPhones.
Meta also says it is working on a new spatial app framework to make it easier for developers with experience on mobile to start making mixed reality apps for Horizon OS and that it will start “removing the barriers between the Meta Horizon Store and App Lab, which lets any developer who meets basic technical and content requirements release software on the platform.”
Pricing, specs, and release dates have not been announced for any of the new devices. Zuckerberg admitted it’s “probably going to take a couple of years” for this ecosystem of hardware devices to roll out.
A 28-year-old Delaware woman, Hadja Kone, was arrested after cops linked her to an international sextortion scheme targeting thousands of victims—mostly young men and including some minors, the US Department of Justice announced Friday.
Citing a recently unsealed indictment, the DOJ alleged that Kone and co-conspirators “operated an international, financially motivated sextortion and money laundering scheme in which the conspirators engaged in cyberstalking, interstate threats, money laundering, and wire fraud.”
Through the scheme, conspirators allegedly sought to extort about $6 million from “thousands of potential victims,” the DOJ said, and ultimately successfully extorted approximately $1.7 million.
Young men from the United States, Canada, and the United Kingdom fell for the scheme, the DOJ said. They were allegedly targeted by scammers posing as “young, attractive females online,” who initiated conversations by offering to send sexual photographs or video recordings, then invited victims to “web cam” or “live video chat” sessions.
“Unbeknownst to the victims, during the web cam/live video chats,” the DOJ said, the scammers would “surreptitiously” record the victims “as they exposed their genitals and/or engaged in sexual activity.” The scammers then threatened to publish the footage online or else share the footage with “the victims’ friends, family members, significant others, employers, and co-workers,” unless payments were sent, usually via Cash App or Apple Pay.
Much of that money was allegedly transferred overseas to Kone’s accused co-conspirators, including 22-year-old Siaka Ouattara of the West African nation of Ivory Coast. Ouattara was arrested by Ivorian authorities in February, the DOJ said.
“If convicted, Kone and Ouattara each face a maximum penalty of 20 years in prison for each conspiracy count and money laundering count, and a maximum penalty of 20 years in prison for each wire fraud count,” the DOJ said.
The FBI has said that it has been cracking down on sextortion after “a huge increase in the number of cases involving children and teens being threatened and coerced into sending explicit images online.” In 2024, the FBI announced a string of arrests, but none of the schemes so far have been as vast or far-reaching as the scheme that Kone allegedly helped operate.
In January, the FBI issued a warning about the “growing threat” to minors, warning parents that victims are “typically males between the ages of 14 to 17, but any child can become a victim.” Young victims are at risk of self-harm or suicide, the FBI said.
“From October 2021 to March 2023, the FBI and Homeland Security Investigations received over 13,000 reports of online financial sextortion of minors,” the FBI’s announcement said. “The sextortion involved at least 12,600 victims—primarily boys—and led to at least 20 suicides.”
For years, reports have shown that payment apps have been used in sextortion schemes with seemingly little intervention. When it comes to protecting minors, sextortion protections seem sparse, as neither Apple Pay nor Cash App appears to have any specific policies to combat the issue. However, both apps allow minors over 13 to create accounts only with an authorized adult supervising.
Apple and Cash App did not immediately respond to Ars’ request to comment.
Instagram, Snapchat add sextortion protections
Some social media platforms are responding to the spike in sextortion targeting minors.
Last year, Snapchat released a report finding that nearly two-thirds of more than 6,000 teens and young adults in six countries said that “they or their friends have been targeted in online ‘sextortion’ schemes” across many popular social media platforms. As a result of that report and prior research, Snapchat began allowing users to report sextortion specifically.
“Under the reporting menu for ‘Nudity or sexual content,’ a Snapchatter’s first option is to click, ‘They leaked/are threatening to leak my nudes,'” the report said.
Additionally, the DOJ’s announcement of Kone’s arrest came one day after Instagram confirmed that it was “testing new features to help protect young people from sextortion and intimate image abuse, and to make it more difficult for potential scammers and criminals to find and interact with teens.”
One feature will by default blur out sexual images shared over direct message, which Instagram said would protect minors from “scammers who may send nude images to trick people into sending their own images in return.” Instagram will also provide safety tips to anyone receiving a sexual image over DM, “encouraging them to report any threats to share their private images and reminding them that they can say no to anything that makes them feel uncomfortable.”
Perhaps more impactful, Instagram claimed that it was “developing technology to help identify where accounts may potentially be engaging in sextortion scams, based on a range of signals that could indicate sextortion behavior.” Better signals make it “harder for potential sextortion accounts to message or interact with people,” the platform said, in part by hiding their message requests. Instagram also by default blocks adults from messaging users under 16 in some countries and under 18 in others.
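Instagram has not disclosed what those signals are. Purely as a hypothetical sketch of how signal-based risk scoring of this kind can gate message requests, here is a short Python example in which every signal name, weight, and threshold is invented for illustration.

# Hypothetical sketch of signal-based risk scoring; Instagram's actual
# signals and logic are not public. All names and weights are invented.
SIGNAL_WEIGHTS = {
    "new_account": 1.0,
    "mass_follows_of_teens": 3.0,
    "reported_for_threats": 4.0,
    "blocked_by_many_teens": 2.5,
}
RISK_THRESHOLD = 5.0  # invented cutoff

def sextortion_risk(signals: set[str]) -> float:
    """Sum the weights of whichever risk signals an account exhibits."""
    return sum(SIGNAL_WEIGHTS.get(s, 0.0) for s in signals)

def handle_message_request(sender_signals: set[str]) -> str:
    """Hide message requests from accounts scoring above the threshold
    instead of surfacing them to the recipient."""
    if sextortion_risk(sender_signals) >= RISK_THRESHOLD:
        return "hide message request"
    return "deliver message request"

print(handle_message_request({"new_account", "reported_for_threats", "blocked_by_many_teens"}))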
Instagram said that other tech companies have also started “sharing more signals about sextortion accounts” through Lantern, a program that Meta helped to found with the Tech Coalition to prevent child sexual exploitation. Snapchat also participates in the cross-platform research.
According to the special agent in charge of the FBI’s Norfolk field office, Brian Dugan, “one of the best lines of defense to stopping a crime like this is to educate our most vulnerable on common warning signs, as well as empowering them to come forward if they are ever victimized.”
Both Instagram and Snapchat said they were also increasing sextortion resources available to educate young users.
“We know that sextortion is a risk teens and adults face across a range of platforms, and have developed tools and resources to help combat it,” Snap’s spokesperson told Ars. “We have extra safeguards for teens to protect against unwanted contact, and don’t offer public friend lists, which we know can be used to extort people. We also want to help young people learn the signs of this type of crime, and recently launched in-app resources to raise awareness of how to spot and report it.”