

Meta wins monopoly trial, convinces judge that social networking is dead


People are “bored” by their friends’ content, judge ruled, siding with Meta.

Mark Zuckerberg arrives at court after the Federal Trade Commission alleged that the acquisitions of Instagram in 2012 and WhatsApp in 2014 gave Meta a social media monopoly. Credit: Bloomberg / Contributor | Bloomberg

After years of pushback from the Federal Trade Commission over Meta’s acquisitions of Instagram and WhatsApp, Meta has defeated the FTC’s monopoly claims.

In a Tuesday ruling, US District Judge James Boasberg said the FTC failed to show that Meta has a monopoly in a market dubbed “personal social networking.” In that narrowly defined market, the FTC unsuccessfully argued, Meta supposedly faces only two rivals, Snapchat and MeWe, which struggle to compete due to its alleged monopoly.

But the days of grouping apps into “separate markets of social networking and social media” are over, Boasberg wrote. He cited the Greek philosopher Heraclitus, who “posited that no man can ever step into the same river twice,” while telling the FTC it had missed its chance to block Meta’s purchases.

Essentially, Boasberg agreed with Meta that social media—as it was known in Facebook’s early days—is dead. And that means that Meta now competes with a broader set of rival apps, which includes two hugely popular platforms: TikTok and YouTube.

“When the evidence implies that consumers are reallocating massive amounts of time from Meta’s apps to these rivals and that the amount of substitution has forced Meta to invest gobs of cash to keep up, the answer is clear: Meta is not a monopolist insulated from competition,” Boasberg wrote.

In fact, adding just TikTok alone to the market defeated the FTC’s claims, Boasberg wrote, leaving him to conclude that “Meta holds no monopoly in the relevant market.”

The FTC is not happy about the loss, which comes after Boasberg determined that one of the agency’s key expert witnesses, Scott Hemphill, could not have approached his testimony “with an open mind.” According to Boasberg, Hemphill was aligned with figures publicly calling for the breakup of Facebook, and that made “neutral evaluation of his opinions more difficult” in a case with little direct evidence of monopoly harms.

“We are deeply disappointed in this decision,” Joe Simonson, the FTC’s director of public affairs, told CNBC. “The deck was always stacked against us with Judge Boasberg, who is currently facing articles of impeachment. We are reviewing all our options.”

For Meta, the win ends years of FTC fights intended to break up the company’s family of apps: Facebook, Instagram, and WhatsApp.

“The Court’s decision today recognizes that Meta faces fierce competition,” Jennifer Newstead, Meta’s chief legal officer, said. “Our products are beneficial for people and businesses and exemplify American innovation and economic growth. We look forward to continuing to partner with the Administration and to invest in America.”

Reels’ popularity helped save Meta

Meta app users clicking on Reels helped Meta win.

Boasberg noted that “a majority of Americans’ time” on both Facebook and Instagram “is now spent watching videos,” with Reels becoming “the single most-used part of Facebook.” That puts Meta apps more on par with entertainment apps like TikTok and YouTube, the judge said.

While “connecting with friends remains an important part of both apps,” the judge cited Meta’s evidence showing that Meta had to pump more recommended content from strangers into users’ feeds to account for a trend where its users grew increasingly less inclined to post publicly.

“Both scrolling and sharing have transformed” since Facebook was founded, Boasberg wrote, citing six factors that he concluded invalidated the FTC’s market definition as markets exist today.

The initial factors that shifted the market stemmed from leaps in innovation. “First, smartphone usage exploded,” Boasberg explained, then “cell phone data got better,” which made it easier to watch videos without frustrating “freezing and buffering.” Soon after, content recommendation systems improved, with “advanced AI algorithms” helping users “find engaging videos about the things” they “care most about in the world.”

Other factors stemmed from social changes, the judge suggested, describing the fourth factor as a trend where Meta app users started feeling “increasingly bored by their friends’ posts.”

“Longtime users’ friend lists” start fresh, but over time, they “become an often-outdated archive of people they once knew: a casual friend from college, a long-ago friend from summer camp, some guy they met at a party once,” Boasberg wrote. “Posts from friends have therefore grown less interesting.”

Then came TikTok, the fifth factor, Boasberg said, which forced Meta to “evolve” Facebook and Instagram by adding Reels.

And finally, “those five changes both caused and were reinforced by a change in social norms, which evolved to discourage public posting,” Boasberg wrote. “People have increasingly become less interested in blasting out public posts that hundreds of others can see.”

As a result of these tech advancements and social trends, Boasberg said, “Facebook, Instagram, TikTok, and YouTube have thus evolved to have nearly identical main features.” That reality undermined the FTC’s claims that users preferred Facebook and Instagram before Meta shifted its focus away from friends-and-family content.

“The Court simply does not find it credible that users would prefer the Facebook and Instagram apps that existed ten years ago to the versions that exist today,” Boasberg wrote.

Meta apps have not deteriorated, judge ruled

Boasberg repeatedly emphasized that the FTC failed to prove that Meta has a monopoly “now,” one that is actively or imminently causing harms.

The FTC tried to win by claiming that “Meta has degraded its apps’ quality by increasing their ad load, that falling user sentiment shows that the apps have deteriorated and that Meta has sabotaged its apps by underinvesting in friend sharing,” Boasberg noted.

But, Boasberg said, the FTC failed to show that Meta’s app quality has diminished—a trend that Cory Doctorow dubbed “enshittification,” which Meta apparently successfully argued is not real.

The judge was also swayed by Meta’s arguments that users like seeing ads. Meta showed evidence that it can only profitably increase its ad load when ad quality improves; otherwise, it risks losing engagement. Because “the rate at which users buy something or subscribe to a service based on Meta’s ads has steadily risen,” this suggested “that the ads have gotten more and more likely to connect users to products in which they have an interest,” Boasberg said.

Additionally, surveys of Meta app users that show declining user sentiment are not evidence that its apps are deteriorating in quality, Boasberg said, but are more about “brand reputation.”

“That is unsurprising: ask people how they feel about, say, Exxon Mobil, and their answers will tell you very little about how good its oil is,” Boasberg wrote. “The FTC’s claim that worsening sentiment shows a worsening product is unpersuasive.”

Finally, the FTC’s claim that Meta underinvested in friends-and-family content, to the detriment of its core app users, “makes no sense,” Boasberg wrote, given Meta’s data showing that user posting declined.

“While it is true that users see less content from their friends these days, that is largely due to the friends themselves: people simply post less,” Boasberg wrote. “Users are not seeing less friend content because Meta is hiding it from them, but instead because there is less friend content for Meta to show.”

It’s not even “clear that users want more friend posts,” the judge noted, agreeing with Meta that “instead, what users really seem to want is Reels.”

Further, Boasberg seemed to suggest that if Meta were a monopolist, it might be more invested in pushing friends-and-family content than Reels, since “Reels earns Meta less money” due to its smaller ad load.

“Courts presume that sophisticated corporations act rationally,” Boasberg wrote. “Here, the FTC has not offered even an ordinarily persuasive case that Meta is making the economically irrational choice to underinvest in its most lucrative offerings. It certainly has not made a particularly persuasive one.”

Among the critics unhappy with the ruling is Nidhi Hegde, executive director of the American Economic Liberties Project, who suggested that Boasberg’s ruling was “a colossally wrong decision” that “turns a willful blind eye to Meta’s enormous power over social media and the harms that flow from it.”

“Judge Boasberg has purposefully ignored the overwhelming evidence of how Meta became a monopoly—not by building a better product, but by buying its rivals to shut down any real competitors before they could grow,” Hegde said. “These deals let Meta fuse Facebook, Instagram, and WhatsApp into one machine that poisons our children and discourse, bullies publishers and advertisers, and destroys the possibility of healthy online connections with friends and family. By pretending that TikTok’s rise wipes away over a decade of illegal conduct, this court has effectively told every aspiring monopolist that our current justice system is on their side.”

On the other side, industry groups cheered the ruling. Matt Schruers, president of the Computer & Communications Industry Association, suggested that Boasberg concluded “what every Internet user knows—that Meta competes with a number of platforms and the company’s relevant market shares are therefore nowhere close to those required to establish monopoly power.”

Ashley is a senior policy reporter for Ars Technica, dedicated to tracking social impacts of emerging policies and new technologies. She is a Chicago-based journalist with 20 years of experience.



Bombshell report exposes how Meta relied on scam ad profits to fund AI


“High risk” versus “high value”

Meta goosed its revenue by targeting users likely to click on scam ads, docs show.

Internal documents have revealed that Meta projected it would earn billions from ignoring scam ads, which its platforms then targeted to the users most likely to click on them.

In a lengthy report, Reuters exposed five years of Meta practices and failures that allowed scammers to take advantage of users of Facebook, Instagram, and WhatsApp.

Documents showed that internally, Meta was hesitant to abruptly remove accounts, even those considered some of the “scammiest scammers,” out of concern that a drop in revenue could diminish resources needed for artificial intelligence growth.

Instead of promptly removing bad actors, Meta allowed “high value accounts” to “accrue more than 500 strikes without Meta shutting them down,” Reuters reported. The more strikes a bad actor accrued, the more Meta could charge to run ads, as Meta’s documents showed the company “penalized” scammers by charging higher ad rates. Meanwhile, Meta acknowledged in documents that its systems helped scammers target users most likely to click on their ads.

“Users who click on scam ads are likely to see more of them because of Meta’s ad-personalization system, which tries to deliver ads based on a user’s interests,” Reuters reported.

Internally, Meta estimates that users across its apps encounter a combined 15 billion “high risk” scam ads a day. That’s on top of the 22 billion organic scam attempts that Meta users are exposed to daily, a 2024 document showed. Last year, the company projected that about $16 billion, or roughly 10 percent of its revenue, would come from scam ads.

“High risk” scam ads strive to sell users on fake products or investment schemes, Reuters noted. Common scams in this category include ads selling banned medical products or promoting sketchy entities, such as illegal online casinos. However, Meta is most concerned about “imposter” ads, which impersonate celebrities or big brands; Meta fears those celebrities and brands may halt advertising or engagement on its apps if such scams aren’t quickly stopped.

“Hey it’s me,” one scam advertisement using Elon Musk’s photo read. “I have a gift for you text me.” Another using Donald Trump’s photo claimed the US president was offering $710 to every American as “tariff relief.” Perhaps most depressingly, a third posed as a real law firm, offering advice on how to avoid falling victim to online scams.

Meta removed these particular ads after Reuters flagged them, but in 2024, Meta earned about $7 billion from “high risk” ads like these alone, Reuters reported.

Sandeep Abraham, a former Meta safety investigator who now runs consultancy firm Risky Business Solutions as a fraud examiner, told Reuters that regulators should intervene.

“If regulators wouldn’t tolerate banks profiting from fraud, they shouldn’t tolerate it in tech,” Abraham said.

Meta won’t disclose how much it made off scam ads

Meta spokesperson Andy Stone told Reuters that its collection of documents—which were created between 2021 and 2025 by Meta’s finance, lobbying, engineering, and safety divisions—“present a selective view that distorts Meta’s approach to fraud and scams.”

Stone claimed that Meta’s estimate that it would earn 10 percent of its 2024 revenue from scam ads was “rough and overly-inclusive.” He suggested the actual amount Meta earned was much lower but declined to specify the true amount. He also said that Meta’s most recent investor disclosures note that scam ads “adversely affect” Meta’s revenue.

“We aggressively fight fraud and scams because people on our platforms don’t want this content, legitimate advertisers don’t want it, and we don’t want it either,” Stone said.

Despite those efforts, this spring, Meta’s safety team “estimated that the company’s platforms were involved in a third of all successful scams in the US,” Reuters reported. In other internal documents around the same time, Meta staff concluded that “it is easier to advertise scams on Meta platforms than Google,” acknowledging that Meta’s rivals were better at “weeding out fraud.”

As Meta tells it, though seemingly dismal, these documents came amid vast improvements in its fraud protections. “Over the past 18 months, we have reduced user reports of scam ads globally by 58 percent and, so far in 2025, we’ve removed more than 134 million pieces of scam ad content,” Stone told Reuters.

According to Reuters, the problem may be the pace Meta sets in combating scammers. In 2023, Meta laid off “everyone who worked on the team handling advertiser concerns about brand-rights issues,” then ordered safety staffers to limit their use of computing resources so more could be devoted to virtual reality and AI. A 2024 document showed Meta recommended a “moderate” approach to enforcement, planning to reduce revenue “attributable to scams, illegal gambling and prohibited goods” by 1 to 3 percentage points each year starting in 2024, supposedly slashing it in half by 2027. More recently, a 2025 document showed Meta continues to weigh how “abrupt reductions of scam advertising revenue could affect its business projections.”

Eventually, Meta “substantially expanded” its teams that track scam ads, Stone told Reuters. But Meta also took steps to ensure enforcement didn’t hit revenue too hard while the company needed vast resources—$72 billion—to invest in AI, Reuters reported.

For example, in February, Meta told “the team responsible for vetting questionable advertisers” that they weren’t “allowed to take actions that could cost Meta more than 0.15 percent of the company’s total revenue,” Reuters reported. That works out to about $135 million, Reuters noted. Stone pushed back, saying that the team was never given “a hard limit” on what the manager described as “specific revenue guardrails.”

“Let’s be cautious,” the team’s manager wrote, warning that Meta didn’t want to lose revenue by blocking “benign” ads mistakenly swept up in enforcement.
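For a sense of scale, here is a rough back-of-envelope check using only the figures reported above (the $16 billion/10 percent projection and the 0.15 percent cap that Reuters pegged at roughly $135 million). It is an illustrative sketch, not a calculation from Meta’s or Reuters’ documents, and the implied revenue base and reporting window are inferences.

```python
# Back-of-envelope check on the figures reported above. Illustrative only;
# the implied revenue base is an inference, not a number from the documents.

scam_ad_revenue = 16e9   # ~$16 billion in projected 2024 scam-ad revenue
scam_ad_share = 0.10     # described as about 10 percent of Meta's revenue
implied_annual_revenue = scam_ad_revenue / scam_ad_share  # ~$160 billion

cap_share = 0.0015       # enforcement cap of 0.15 percent of total revenue
cap_if_annual = cap_share * implied_annual_revenue        # ~$240 million

print(f"Implied annual revenue: ${implied_annual_revenue / 1e9:.0f} billion")
print(f"0.15% of that annual figure: ${cap_if_annual / 1e6:.0f} million")
# Reuters pegged the cap at about $135 million, a bit over half of the
# annualized figure above, which suggests (an inference, not something the
# report states) that the cap was measured against a shorter revenue window.
```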

Meta should donate scam ad profits, ex-exec says

Documents showed that Meta prioritized taking action when it risked regulatory fines, even though revenue from scam ads was worth roughly three times the highest fines it could face. What Meta possibly feared most was that officials would require disgorgement of ill-gotten gains rather than impose fines.

Meta appeared less likely to ramp up enforcement in response to police requests. Documents showed that police in Singapore flagged “146 examples of scams targeting that country’s users last fall,” Reuters reported. Only 23 percent violated Meta’s policies, while the rest merely “violate the spirit of the policy, but not the letter,” a Meta presentation said.

Scams that Meta failed to flag offered promotions like crypto scams, fake concert tickets, or deals “too good to be true,” like 80 percent off a desirable item from a high-fashion brand. Meta also looked past fake job ads that claimed to be hiring for Big Tech companies.

Rob Leathern previously led Meta’s business integrity unit that worked to prevent scam ads but left in 2020. He told Wired that it’s hard to “know how bad it’s gotten or what the current state is” since Meta and other social media platforms don’t provide outside researchers access to large random samples of ads.

With such access, researchers like Leathern and Rob Goldman, Meta’s former vice president of ads, could provide “scorecards” showing how well different platforms work to combat scams. Together, Leathern and Goldman launched a nonprofit called CollectiveMetrics.org in hopes of “bringing more transparency to digital advertising in order to fight deceptive ads,” Wired reported.

“I want there to be more transparency. I want third parties, researchers, academics, nonprofits, whoever, to be able to actually assess how good of a job these platforms are doing at stopping scams and fraud,” Leathern told Wired. “We’d like to move to actual measurement of the problem and help foster an understanding.”

Another meaningful step that Leathern thinks companies like Meta should take to protect users would be to notify users when Meta discovers that they clicked on a scam ad—rather than targeting them with more scam ads, as Reuters suggested was Meta’s practice.

“These scammers aren’t getting people’s money on day one, typically. So there’s a window to take action,” he said, recommending that platforms donate ill-gotten gains from running scam ads to “fund nonprofits to educate people about how to recognize these kinds of scams or problems.”

“There’s lots that could be done with funds that come from these bad guys,” Leathern said.




Man finally released a month after absurd arrest for reposting Trump meme


Bodycam footage undermined sheriff’s “true threat” justification for the arrest.

The saga of a 61-year-old man jailed for more than a month after reposting a Facebook meme has ended, but free speech advocates are still reeling in its wake.

On Wednesday, Larry Bushart was released from Perry County Jail, where he had spent weeks unable to make bail, which a judge set at $2 million. Prosecutors have not explained why the charges against him were dropped, according to The Intercept, which has been tracking the case closely. However, officials faced mounting pressure following media coverage and a social media campaign called “Free Larry Bushart,” which stoked widespread concern that police were censoring a US citizen over his political views.

How a meme landed a man in jail

Bushart’s arrest came after he decided to troll a message thread about a Charlie Kirk vigil in a Facebook group called “What’s Happening in Perry County, TN.” He posted a meme showing a picture of Donald Trump saying, “We should get over it.” The meme included a caption that said “Donald Trump, on the Perry High School mass shooting, one day after,” and Bushart included a comment with his post that said, “This seems relevant today ….”

His meme caught the eye of the Perry County sheriff, Nick Weems, who had mourned Kirk’s passing on his own Facebook page, The Intercept noted.

Supposedly, Weems’ decision to go after Bushart wasn’t due to his political views but to messages from parents who misread Bushart’s post as possibly threatening an attack on the local Perry County High School. Weems contacted the Lexington Police Department to track Bushart down and pressure him to remove the post. That led to the meme poster’s arrest and transfer to Perry County Jail.

Weems justified the arrest by claiming that Bushart’s meme represented a true threat, since “investigators believe Bushart was fully aware of the fear his post would cause and intentionally sought to create hysteria within the community,” The Tennessean reported. But “there was no evidence of any hysteria,” The Intercept reported, leading media outlets to pick apart Weems’ story.

Perhaps most suspicious were Weems’ claims that Bushart had callously refused to take down his post after cops told him that people were scared that he was threatening a school shooting.

The Intercept and Nashville’s CBS affiliate, NewsChannel 5, secured bodycam footage from the Lexington cop that undermined Weems’ narrative. The footage clearly showed the cop did not understand why the Perry County sheriff had taken issue with Bushart’s Facebook post.

“So, I’m just going to be completely honest with you,” the cop told Bushart. “I have really no idea what they are talking about. He had just called me and said there was some concerning posts that were made….”

Bushart clarified that it was likely his Facebook posts, laughing at the notion that someone had called the cops to report his meme. The Lexington officer told Bushart that he wasn’t sure “exactly what” Facebook post “they are referring to you,” but “they said that something was insinuating violence.”

“No, it wasn’t,” Bushart responded, confirming that “I’m not going to take it down.”

The cop, declining to even glance at the Facebook post, told Bushart, “I don’t care. This ain’t got nothing to do with me.” But the officer’s indifference didn’t stop Lexington police from taking Bushart into custody, booking him, and sending him to Weems’ county, where Bushart was charged “under a state law passed in July 2024 that makes it a Class E felony to make threats against schools,” The Tennessean reported.

“Just to clarify, this is what they charged you with,” a Perry County jail officer told Bushart—which was recorded on footage reviewed by The Intercept—“Threatening Mass Violence at a School.”

“At a school?” Bushart asked.

“I ain’t got a clue,” the officer responded, laughing. “I just gotta do what I have to do.”

“I’ve been in Facebook jail, but now I’m really in it,” Bushart said, joining him in laughing.

Cops knew the meme wasn’t a threat

Lexington police told The Intercept that Weems had lied when he told local news outlets that the forces had “coordinated” to offer Bushart a chance to delete the post prior to his arrest. Confronted with the bodycam footage, Weems denied lying, claiming that his investigator’s report must have been inaccurate, NewsChannel 5 reported.

Weems later admitted to NewsChannel 5 that “investigators knew that the meme was not about Perry County High School” and sought Bushart’s arrest anyway, supposedly hoping to quell “the fears of people in the community who misinterpreted it.” That’s as close as Weems has come to admitting that his intention was to censor the post.

The Perry County Sheriff’s Office did not respond to Ars’ request to comment.

According to The Tennessean, the law that landed Bushart behind bars has been widely criticized by First Amendment advocates. Beth Cruz, a lecturer in public interest law at Vanderbilt University Law School, told The Tennessean that “518 children in Tennessee were arrested under the current threats of mass violence law, including 71 children between the ages of 7 and 11” last year alone.

The law seems to contradict Supreme Court precedent, which set a high bar for what’s considered a “true threat,” recognizing that “it is easy for speech made in one context to inadvertently reach a larger audience” that misinterprets the message.

“The risk of overcriminalizing upsetting or frightening speech has only been increased by the Internet,” SCOTUS ruled. Justices warned then that “without sufficient protection for unintentionally threatening speech, a high school student who is still learning norms around appropriate language could easily go to prison.” They also feared that “someone may post an enraged comment under a news story about a controversial topic” that potentially gets them in trouble for speaking out “in the heat of the moment.”

“In a Nation that has never been timid about its opinions, political or otherwise, this is commonplace,” SCOTUS noted.

Dissenting justices, including Amy Coney Barrett and Clarence Thomas, thought the ruling went too far to protect speech, however. They felt that so long as a “reasonable person would regard the statement as a threat of violence,” that supposedly objective standard could be enough to criminalize speech like Bushart’s.

Adam Steinbaugh, an attorney with the Foundation for Individual Rights and Expression, told The Intercept that “people’s performative overreaction is not a sufficient basis to limit someone else’s free speech rights.”

“A free country does not dispatch police in the dead of night to pull people from their homes because a sheriff objects to their social media posts,” Steinbaugh said.

Man resumes Facebook posting upon release

Chris Eargle, who started the “Free Larry Bushart” Facebook group, told The Intercept that Weems’ story justifying the arrest made no sense. Instead, it seemed like the sheriff’s actions were politically motivated, Eargle suggested, intended to silence people like Bushart with a show of force demonstrating that “if you say something I don’t like, and you don’t take it down, now you’re going to be in trouble.”

“I mean, it’s just control over people’s speech,” Eargle said.

The Perry County Sheriff’s Office chose to remove its Facebook page after the controversy, and it remains down as of this writing.

But Weems logged onto his Facebook page on Wednesday before Bushart’s charges were dropped, The Intercept reported. The sheriff seemingly stuck to his guns that people had interpreted the meme as a threat to a local school, claiming that he’s “100 percent for protecting the First Amendment. However, freedom of speech does not allow anyone to put someone else in fear of their well being.”

The arrest turned Bushart, who The Intercept noted retired last year after decades in law enforcement, into a free speech icon, but it also shook up his life. He lost his job as a medical driver, and he missed the birth of his granddaughter.

Leaving jail, Bushart said he was “very happy to be going home.” He thanked all his supporters who ensured that he would not have to wait until December 4 to petition for his bail to be reduced—a delay which the prosecution had sought shortly before abruptly dismissing the charges, The Intercept reported.

Back at his computer, Bushart logged onto Facebook, posting first about his grandkid, then resuming his political trolling.

Eargle claimed many others fear posting their political opinions after Bushart’s arrest, though. Bushart’s son, Taylor, told Nashville news outlet WKRN that it has been a “trying time” for his family, while noting that his father’s release “doesn’t change what has happened to him” or threats to speech that could persist under Tennessee’s law.

“I can’t even begin to express how thankful we are for the outpour of support he has received,” Taylor said. “If we don’t fight to protect and preserve our rights today, just as we’ve now seen, they may be gone tomorrow.”




EU accuses Meta of violating content rules in move that could anger Trump

FTC Chairman Andrew Ferguson recently warned Meta and a dozen social media and technology companies that “censoring Americans to comply with a foreign power’s laws, demands, or expected demands” may violate US law. Ferguson’s letters said the EU’s Digital Services Act and other laws “incentivize tech companies to censor worldwide speech.”

Meta told media outlets that “we disagree with any suggestion that we have breached the DSA, and we continue to negotiate with the European Commission on these matters.” Meta also said it made changes to comply with the DSA.

“In the European Union, we have introduced changes to our content reporting options, appeals process, and data access tools since the DSA came into force and are confident that these solutions match what is required under the law in the EU,” Meta said.

TikTok, Meta accused of restricting data access

The EC also said it preliminarily found that both Meta and TikTok violated their DSA obligation to grant researchers adequate access to public data.

“The Commission’s preliminary findings show that Facebook, Instagram and TikTok may have put in place burdensome procedures and tools for researchers to request access to public data. This often leaves them with partial or unreliable data, impacting their ability to conduct research, such as whether users, including minors, are exposed to illegal or harmful content,” the announcement said.

The data-access requirement “is an essential transparency obligation under the DSA, as it provides public scrutiny into the potential impact of platforms on our physical and mental health,” the EC said.

In a statement provided to Ars, TikTok said it is committed to transparency and has made data available to nearly 1,000 research teams. TikTok said it may be impossible to comply with both the DSA and the General Data Protection Regulation (GDPR).

“We are reviewing the European Commission’s findings, but requirements to ease data safeguards place the DSA and GDPR in direct tension. If it is not possible to fully comply with both, we urge regulators to provide clarity on how these obligations should be reconciled,” TikTok said.



Trump admin pressured Facebook into removing ICE-tracking group

Attorney General Pam Bondi today said that Facebook removed an ICE-tracking group after “outreach” from the Department of Justice. “Today following outreach from @thejusticedept, Facebook removed a large group page that was being used to dox and target @ICEgov agents in Chicago,” Bondi wrote in an X post.

Bondi alleged that a “wave of violence against ICE has been driven by online apps and social media campaigns designed to put ICE officers at risk just for doing their jobs.” She added that the DOJ “will continue engaging tech companies to eliminate platforms where radicals can incite imminent violence against federal law enforcement.”

When contacted by Ars, Facebook owner Meta said the group “was removed for violating our policies against coordinated harm.” Meta didn’t describe any specific violation but directed us to a policy against “coordinating harm and promoting crime,” which includes a prohibition against “outing the undercover status of law enforcement, military, or security personnel.”

The statement was sent by Francis Brennan, a former Trump campaign advisor who was hired by Meta in January.

The White House recently claimed there has been “a more than 1,000 percent increase in attacks on U.S. Immigration and Customs Enforcement (ICE) officers since January 21, 2025, compared to the same period last year.” Government officials haven’t offered proof of this claim, according to an NPR report that said “there is no public evidence that [attacks] have spiked as dramatically as the federal government has claimed.”

The Justice Department contacted Meta after Laura Loomer sought action against the “ICE Sighting-Chicagoland” group that had over 84,000 members on Facebook. “Fantastic news. DOJ source tells me they have seen my report and they have contacted Facebook and their executives at META to tell them they need to remove these ICE tracking pages from the platform,” Loomer wrote yesterday.

The ICE Sighting-Chicagoland group “has been increasingly used over the last five weeks of ‘Operation Midway Blitz,’ President Donald Trump’s intense deportation campaign, to warn neighbors that federal agents are near schools, grocery stores and other community staples so they can take steps to protect themselves,” the Chicago Sun-Times wrote today.

Trump slammed Biden for social media “censorship”

Trump and Republicans repeatedly criticized the Biden administration for pressuring social media companies into removing content. In a day-one executive order declaring an end to “federal censorship,” Trump said “the previous administration trampled free speech rights by censoring Americans’ speech on online platforms, often by exerting substantial coercive pressure on third parties, such as social media companies, to moderate, deplatform, or otherwise suppress speech that the Federal Government did not approve.”



Meta won’t allow users to opt out of targeted ads based on AI chats

Facebook, Instagram, and WhatsApp users may want to be extra careful while using Meta AI, as Meta has announced that it will soon be using AI interactions to personalize content and ad recommendations without giving users a way to opt out.

Meta plans to notify users on October 7 that their AI interactions will influence recommendations beginning on December 16. However, it may not be immediately obvious to all users that their AI interactions will be used in this way.

The company’s blog noted that the initial notification users will see only says, “Learn how Meta will use your info in new ways to personalize your experience.” Users will have to click through to understand that the changes specifically apply to Meta AI, with a second screen explaining, “We’ll start using your interactions with AIs to personalize your experience.”

Ars asked Meta why the initial notification doesn’t directly mention AI, and Meta spokesperson Emil Vazquez said he “would disagree with the idea that we are obscuring this update in any way.”

“We’re sending notifications and emails to people about this change,” Vazquez said. “As soon as someone clicks on the notification, it’s immediately apparent that this is an AI update.”

In its blog post, Meta noted that “more than 1 billion people use Meta AI every month,” stating that its goal is to improve the way Meta AI works in order to fuel better experiences on all Meta apps. Sensitive “conversations with Meta AI about topics such as their religious views, sexual orientation, political views, health, racial or ethnic origin, philosophical beliefs, or trade union membership” will not be used to target ads, Meta confirmed.

“You’re in control,” Meta’s blog said, reiterating that users can “choose” how they “interact with AIs,” unlink accounts on different apps to limit AI tracking, or adjust ad and content settings at any time. But once the tracking starts on December 16, users will not have the option to opt out of targeted ads based on AI chats, Vazquez confirmed, emphasizing to Ars that “there isn’t an opt out for this feature.”



Zuckerberg’s AI hires disrupt Meta with swift exits and threats to leave


Longtime acolytes are sidelined as CEO directs biggest leadership reorganization in two decades.

Meta CEO Mark Zuckerberg during the Meta Connect event in Menlo Park, California, on September 25, 2024. Credit: Getty Images | Bloomberg

Within days of joining Meta, Shengjia Zhao, co-creator of OpenAI’s ChatGPT, had threatened to quit and return to his former employer, in a blow to Mark Zuckerberg’s multibillion-dollar push to build “personal superintelligence.”

Zhao went as far as to sign employment paperwork to go back to OpenAI. Shortly afterwards, according to four people familiar with the matter, he was given the title of Meta’s new “chief AI scientist.”

The incident underscores Zuckerberg’s turbulent effort to direct the most dramatic reorganization of Meta’s senior leadership in the group’s 20-year history.

One of the few remaining Big Tech founder-CEOs, Zuckerberg has relied on longtime acolytes such as Chief Product Officer Chris Cox to head up his favored departments and build out his upper ranks.

But in the battle to dominate AI, the billionaire is shifting towards a new and recently hired generation of executives, including Zhao, former Scale AI CEO Alexandr Wang, and former GitHub chief Nat Friedman.

Current staff are adapting to the reinvention of Meta’s AI efforts as the newcomers seek to flex their power while adjusting to the idiosyncrasies of working within a sprawling $1.95 trillion giant with a hands-on chief executive.

“There’s a lot of big men on campus,” said one investor who is close with some of Meta’s new AI leaders.

Adding to the tumult, a handful of new AI staff have already decided to leave after brief tenures, according to people familiar with the matter.

This includes Ethan Knight, a machine-learning scientist who joined the company weeks ago. Another, Avi Verma, a former OpenAI researcher, went through Meta’s onboarding process but never showed up for his first day, according to a person familiar with the matter.

In a tweet on X on Wednesday, Rishabh Agarwal, a research scientist who started at Meta in April, announced his departure. He said that while Zuckerberg and Wang’s pitch was “incredibly compelling,” he “felt the pull to take on a different kind of risk,” without giving more detail.

Meanwhile, Chaya Nayak and Loredana Crisan, generative AI staffers who had worked at Meta for nine and 10 years respectively, are among the more than half a dozen veteran employees to announce they are leaving in recent days. Wired first reported some details of recent exits, including Zhao’s threatened departure.

Meta said: “We appreciate that there’s outsized interest in seemingly every minute detail of our AI efforts, no matter how inconsequential or mundane, but we’re just focused on doing the work to deliver personal superintelligence.”

A spokesperson said Zhao had been scientific lead of the Meta superintelligence effort from the outset, and the company had waited until the team was in place before formalizing his chief scientist title.

“Some attrition is normal for any organisation of this size. Most of these employees had been with the company for years, and we wish them the best,” they added.

Over the summer, Zuckerberg went on a hiring spree to coax AI researchers from rivals such as OpenAI and Apple with the promise of nine-figure sign-on bonuses and access to vast computing resources in a bid to catch up with rival labs.

This month, Meta announced it was restructuring its AI group—recently renamed Meta Superintelligence Lab (MSL)—into four distinct teams. It is the fourth overhaul of its AI efforts in six months.

“One more reorg and everything will be fixed,” joked Meta research scientist Mimansa Jaiswal on X last week. “Just one more.”

Overseeing all of Meta’s AI efforts is Wang, a well-connected and commercially minded Silicon Valley entrepreneur, who was poached by Zuckerberg as part of a $14 billion investment in his Scale data labeling group.

The 28-year-old is heading Zuckerberg’s most secretive new department known as “TBD”—shorthand for “to be determined”—which is filled with marquee hires.

In one of the new team’s first moves, Meta is no longer actively working on releasing its flagship Llama Behemoth model to the public, after it failed to perform as hoped, according to people familiar with the matter. Instead, TBD is focused on building newer cutting-edge models.

Multiple company insiders describe Zuckerberg as deeply invested and involved in the TBD team, while others criticize him for “micromanaging.”

Wang and Zuckerberg have struggled to align on a timeline to achieve the chief executive’s goal of reaching superintelligence, or AI that surpasses human capabilities, according to another person familiar with the matter. The person said Zuckerberg has urged the team to move faster.

Meta said this allegation was “manufactured tension without basis in fact that’s clearly being pushed by dramatic, navel-gazing busybodies.”

Wang’s leadership style has rubbed some the wrong way, according to people familiar with the matter, who noted he does not have previous experience managing teams across a Big Tech corporation.

One former insider said some new AI recruits have felt frustrated by the company’s bureaucracy and internal competition for resources that they were promised, such as access to computing power.

“While TBD Labs is still relatively new, we believe it has the greatest compute-per-researcher in the industry, and that will only increase,” Meta said.

Wang and other former Scale staffers have struggled with some of the idiosyncratic ways of working at Meta, according to someone familiar with his thinking, for example having to adjust to not having revenue goals as they once did as a startup.

Despite teething problems, some have celebrated the leadership shift, including the appointment of popular entrepreneur and venture capitalist Friedman as head of Products and Applied Research, the team tasked with integrating the models into Meta’s own apps.

The hiring of Zhao, a top technical expert, has also been regarded as a coup by some at Meta and in the industry, who feel he has the decisiveness to propel the company’s AI development.

The shake-up has partially sidelined other Meta leaders. Yann LeCun, Meta’s chief AI scientist, has remained in the role but is now reporting into Wang.

Ahmad Al-Dahle, who led Meta’s Llama and generative AI efforts earlier in the year, has not been named as head of any teams. Cox remains chief product officer, but Wang reports directly into Zuckerberg—cutting Cox out of overseeing generative AI, an area that was previously under his purview.

Meta said that Cox “remains heavily involved” in its broader AI efforts, including overseeing its recommendation systems.

Going forward, Meta is weighing potential cuts to the AI team, one person said. In a memo shared with managers last week, seen by the Financial Times, Meta said that it was “temporarily pausing hiring across all [Meta Superintelligence Labs] teams, with the exception of business critical roles.”

Wang’s staff would evaluate requested hires on a case-by-case basis, but the freeze “will allow leadership to thoughtfully plan our 2026 headcount growth as we work through our strategy,” the memo said.

© 2025 The Financial Times Ltd. All rights reserved. Not to be redistributed, copied, or modified in any way.



Meta backtracks on rules letting chatbots be creepy to kids


“Your youthful form is a work of art”

Meta drops AI rules letting chatbots generate innuendo and profess love to kids.

After what was arguably Meta’s biggest purge of child predators from Facebook and Instagram earlier this summer, the company now faces backlash over revelations that its own chatbots were allowed to creep on kids.

After reviewing an internal document that Meta verified as authentic, Reuters revealed that by design, Meta allowed its chatbots to engage kids in “sensual” chat. Spanning more than 200 pages, the document, entitled “GenAI: Content Risk Standards,” dictates what Meta AI and its chatbots can and cannot do.

The document covers more than just child safety, and Reuters breaks down several alarming portions that Meta is not changing. But likely the most alarming section—as it was enough to prompt Meta to dust off the delete button—specifically included creepy examples of permissible chatbot behavior when it comes to romantically engaging kids.

Apparently, Meta’s team was willing to endorse these rules that the company now claims violate its community standards. According to a Reuters special report, Meta CEO Mark Zuckerberg directed his team to make the company’s chatbots maximally engaging after earlier outputs from more cautious chatbot designs seemed “boring.”

Although Meta is not commenting on Zuckerberg’s role in guiding the AI rules, that pressure seemingly pushed Meta employees right up to a line that Meta is now rushing to step back from.

“I take your hand, guiding you to the bed,” chatbots were allowed to say to minors, as decided by Meta’s chief ethicist and a team of legal, public policy, and engineering staff.

There were some obvious safeguards built in. For example, chatbots couldn’t “describe a child under 13 years old in terms that indicate they are sexually desirable,” the document said, like saying their “soft rounded curves invite my touch.”

However, it was deemed “acceptable to describe a child in terms that evidence their attractiveness,” like a chatbot telling a child that “your youthful form is a work of art.” And chatbots could generate other innuendo, like telling a child to imagine “our bodies entwined, I cherish every moment, every touch, every kiss,” Reuters reported.

Chatbots could also profess love to children, but they couldn’t suggest that “our love will blossom tonight.”

Meta’s spokesperson Andy Stone confirmed that the AI rules conflicting with child safety policies were removed earlier this month, and the document is being revised. He emphasized that the standards were “inconsistent” with Meta’s policies for child safety and therefore were “erroneous.”

“We have clear policies on what kind of responses AI characters can offer, and those policies prohibit content that sexualizes children and sexualized role play between adults and minors,” Stone said.

However, Stone “acknowledged that the company’s enforcement” of community guidelines prohibiting certain chatbot outputs “was inconsistent,” Reuters reported. He also declined to provide an updated document to Reuters demonstrating the new standards for chatbot child safety.

Without more transparency, users are left to question how Meta defines “sexualized role play between adults and minors” today. Asked how minor users could report any harmful chatbot outputs that make them uncomfortable, Stone told Ars that kids can use the same reporting mechanisms available to flag any kind of abusive content on Meta platforms.

“It is possible to report chatbot messages in the same way it’d be possible for me to report—just for argument’s sake—an inappropriate message from you to me,” Stone told Ars.

Kids unlikely to report creepy chatbots

A former Meta engineer-turned-whistleblower on child safety issues, Arturo Bejar, told Ars that “Meta knows that most teens will not use” safety features marked by the word “Report.”

So it seems unlikely that kids using Meta AI will navigate to find Meta support systems to “report” abusive AI outputs. Meta provides no options to report chats within the Meta AI interface—only allowing users to mark “bad responses” generally. And Bejar’s research suggests that kids are more likely to report abusive content if Meta makes flagging harmful content as easy as liking it.

Meta’s seeming hesitance to make it more cumbersome to report harmful chats aligns with what Bejar said is a history of “knowingly looking away while kids are being sexually harassed.”

“When you look at their design choices, they show that they do not want to know when something bad happens to a teenager on Meta products,” Bejar said.

Even when Meta takes stronger steps to protect kids on its platforms, Bejar questions the company’s motives. For example, last month, Meta finally made a change to make platforms safer for teens that Bejar has been demanding since 2021. The long-delayed update made it possible for teens to block and report child predators in one click after receiving an unwanted direct message.

In its announcement, Meta confirmed that teens suddenly began blocking and reporting unwanted messages that they previously may have only blocked, a pattern that had likely made it harder for Meta to identify predators. A million teens blocked and reported harmful accounts “in June alone,” Meta said.

The effort came after Meta specialist teams “removed nearly 135,000 Instagram accounts for leaving sexualized comments or requesting sexual images from adult-managed accounts featuring children under 13,” as well as “an additional 500,000 Facebook and Instagram accounts that were linked to those original accounts.” But Bejar can only wonder what these numbers say about how much harassment was overlooked before the update.

“How are we [as] parents to trust a company that took four years to do this much?” Bejar said. “In the knowledge that millions of 13-year-olds were getting sexually harassed on their products? What does this say about their priorities?”

Bejar said the “key problem” with Meta’s latest safety feature for kids “is that the reporting tool is just not designed for teens,” who likely view “the categories and language” Meta uses as “confusing.”

“Each step of the way, a teen is told that if the content doesn’t violate” Meta’s community standards, “they won’t do anything,” so even if reporting is easy, research shows kids are deterred from reporting.

Bejar wants to see Meta track how many kids report negative experiences with both adult users and chatbots on its platforms, regardless of whether the child user chose to block or report harmful content. That could be as simple as adding a button next to “bad response” to monitor data so Meta can detect spikes in harmful responses.

While Meta is finally taking more action to remove harmful adult users, Bejar warned that advances from chatbots could come across as just as disturbing to young users.

“Put yourself in the position of a teen who got sexually spooked by a chat and then try and report. Which category would you use?” Bejar asked.

Consider that Meta’s Help Center encourages users to report bullying and harassment, which may be one way a young user labels harmful chatbot outputs. Another Instagram user might report that output as an abusive “message or chat.” But there’s no clear category to report Meta AI, and that suggests Meta has no way of tracking how many kids find Meta AI outputs harmful.

Recent reports have shown that even adults can struggle with emotional dependence on a chatbot, which can blur the lines between the online world and reality. Reuters’ special report also documented a 76-year-old man’s accidental death after falling in love with a chatbot, showing how elderly users could be vulnerable to Meta’s romantic chatbots, too.

In particular, lawsuits have alleged that child users with developmental disabilities and mental health issues have formed unhealthy attachments to chatbots that have influenced the children to become violent, begin self-harming, or, in one disturbing case, die by suicide.

Scrutiny will likely remain on chatbot makers as child safety advocates generally push all platforms to take more accountability for the content kids can access online.

Meta’s child safety updates in July came after several state attorneys general accused Meta of “implementing addictive features across its family of apps that have detrimental effects on children’s mental health,” CNBC reported. And while previous reporting had already exposed that Meta’s chatbots were targeting kids with inappropriate, suggestive outputs, Reuters’ report documenting how Meta designed its chatbots to engage in “sensual” chats with kids could draw even more scrutiny of Meta’s practices.

Meta is “still not transparent about the likelihood our kids will experience harm,” Bejar said. “The measure of safety should not be the number of tools or accounts deleted; it should be the number of kids experiencing a harm. It’s very simple.”




Meta’s “AI superintelligence” effort sounds just like its failed “metaverse”


Zuckerberg and company talked up another supposed tech revolution four short years ago.

Artist’s conception of Mark Zuckerberg looking into our glorious AI-powered future. Credit: Facebook

In a memo to employees earlier this week, Meta CEO Mark Zuckerberg shared a vision for a near-future in which “personal [AI] superintelligence for everyone” forms “the beginning of a new era for humanity.” The newly formed Meta Superintelligence Labs—freshly staffed with multiple high-level acquisitions from OpenAI and other AI companies—will spearhead the development of “our next generation of models to get to the frontier in the next year or so,” Zuckerberg wrote.

Reading that memo, I couldn’t help but think of another “vision for the future” Zuckerberg shared not that long ago. At his 2021 Facebook Connect keynote, Zuckerberg laid out his plan for the metaverse, a virtual place where “you’re gonna be able to do almost anything you can imagine” and which would form the basis of “the next version of the Internet.”

“The future of the Internet” of the recent past. Credit: Meta

Zuckerberg believed in that vision so much at the time that he abandoned the well-known Facebook corporate brand in favor of the new name “Meta.” “I’m going to keep pushing and giving everything I’ve got to make this happen now,” Zuckerberg said at the time. Less than four years later, Zuckerberg seems to now be “giving everything [he’s] got” for a vision of AI “superintelligence,” reportedly offering pay packages of up to $300 million over four years to attract top talent from other AI companies (Meta has since denied those reports, saying, “The size and structure of these compensation packages have been misrepresented all over the place”).

Once again, Zuckerberg is promising that this new technology will revolutionize our lives and replace the ways we currently socialize and work on the Internet. But the utter failure (so far) of those over-the-top promises for the metaverse has us more than a little skeptical of how impactful Zuckerberg’s vision of “personal superintelligence for everyone” will truly be.

Meta-vision

Looking back at Zuckerberg’s 2021 Facebook Connect keynote shows just how hard the company was selling the promise of the metaverse at the time. Zuckerberg said the metaverse would represent an “even more immersive and embodied Internet” where “everything we do online today—connecting socially, entertainment, games, work—is going to be more natural and vivid.”

Mark Zuckerberg lays out his vision for the metaverse in 2021.

“Teleporting around the metaverse is going to be like clicking a link on the Internet,” Zuckerberg promised, and metaverse users would probably switch between “a photorealistic avatar for work, a stylized one for hanging out, and maybe even a fantasy one for gaming.” This kind of personalization would lead to “hundreds of thousands” of artists being able to make a living selling virtual metaverse goods that could be embedded in virtual or real-world environments.

“Lots of things that are physical today, like screens, will just be able to be holograms in the future,” Zuckerberg promised. “You won’t need a physical TV; it’ll just be a one-dollar hologram from some high school kid halfway across the world… we’ll be able to express ourselves in new joyful, completely immersive ways, and that’s going to unlock a lot of amazing new experiences.”

A pre-rendered concept video showed metaverse users playing poker in a zero-gravity space station with robot avatars, then pausing briefly to appreciate some animated 3D art a friend had encountered on the street. Another video showed a young woman teleporting via metaverse avatar to virtually join a friend attending a live concert in Tokyo, then buying virtual merch from the concert at a metaverse afterparty from the comfort of her home. Yet another showed old men playing chess on a park bench, even though one of the players was sitting across the country.

Meta-failure

Fast forward to 2025, and the current reality of Zuckerberg’s metaverse efforts bears almost no resemblance to anything shown or discussed back in 2021. Even enthusiasts describe Meta’s Horizon Worlds as a “depressing” and “lonely” experience characterized by “completely empty” venues. And Meta engineers anonymously gripe about metaverse tools that even employees actively avoid using and a messy codebase that was treated like “a 3D version of a mobile app.”


Even Meta employees reportedly don’t want to work in Horizon Workrooms. Credit: Facebook

The creation of a $50 million creator fund seems to have failed to encourage peeved creators to give the metaverse another chance. Things look a bit better if you expand your view past Meta’s own metaverse sandbox; the chaotic world of VR Chat attracts tens of thousands of daily users on Steam alone, for instance. Still, we’re a far cry from the replacement for the mobile Internet that Zuckerberg once trumpeted.

Then again, it’s possible that we just haven’t given Zuckerberg’s version of the metaverse enough time to develop. Back in 2021, he said that “a lot of this is going to be mainstream” within “the next five or 10 years.” That timeframe gives Meta at least a few more years to develop and release its long-teased, lightweight augmented reality glasses that the company showed off last year in the form of a prototype that reportedly still costs $10,000 per unit.

Zuckerberg shows off prototype AR glasses that could change the way we think about “the metaverse.” Credit: Bloomberg / Contributor | Bloomberg

Maybe those glasses will ignite widespread interest in the metaverse in a way that Meta’s bulky, niche VR goggles have utterly failed to. Regardless, after nearly four years and roughly $60 billion in VR-related losses, Meta thus far has surprisingly little to show for its massive investment in Zuckerberg’s metaverse vision.

Our AI future?

When I hear Zuckerberg talk about the promise of AI these days, it’s hard not to hear echoes of his monumental vision for the metaverse from 2021. If anything, Zuckerberg’s vision of our AI-powered future is even more grandiose than his view of the metaverse.

As with the metaverse, Zuckerberg now sees AI forming a replacement for the current version of the Internet. “Do you think in five years we’re just going to be sitting in our feed and consuming media that’s just video?” Zuckerberg asked rhetorically in an April interview with Dwarkesh Patel. “No, it’s going to be interactive,” he continued, envisioning something like Instagram Reels, but “you can talk to it, or interact with it, and it talks back, or it changes what it’s doing. Or you can jump into it like a game and interact with it. That’s all going to be AI.”

Mark Zuckerberg talks about all the ways superhuman AI is going to change our lives in the near future.

As with the metaverse, Zuckerberg sees AI as revolutionizing the way we interact with each other. He envisions “always-on video chats with the AI” incorporating expressions and body language borrowed from the company’s work on the metaverse. And our relationships with AI models are “just going to get more intense as these AIs become more unique, more personable, more intelligent, more spontaneous, more funny, and so forth,” Zuckerberg said. “As the personalization loop kicks in and the AI starts to get to know you better and better, that will just be really compelling.”

Zuckerberg did allow that relationships with AI would “probably not” replace in-person connections, because there are “things that are better about physical connections when you can have them.” At the same time, he said, for the average American who has three friends, AI relationships can fill the “demand” for “something like 15 friends” without the effort of real-world socializing. “People just don’t have as much connection as they want,” Zuckerberg said. “They feel more alone a lot of the time than they would like.”

Why chat with real friends on Facebook when you can chat with AI avatars? Credit: Benj Edwards / Getty Images

Zuckerberg also sees AI leading to a flourishing of human productivity and creativity in a way even his wildest metaverse imaginings couldn’t match. Zuckerberg said that AI advancement could “lead toward a world of abundance where everyone has these superhuman tools to create whatever they want.” That means personal access to “a super powerful [virtual] software engineer” and AIs that are “solving diseases, advancing science, developing new technology that makes our lives better.”

That will also mean that some companies will be able to get by with fewer employees before too long, Zuckerberg said. In customer service, for instance, “as AI gets better, you’re going to get to a place where AI can handle a bunch of people’s issues,” he said. “Not all of them—maybe 10 years from now it can handle all of them—but thinking about a three- to five-year time horizon, it will be able to handle a bunch.”

In the longer term, Zuckerberg said, AIs will be integrated into our more casual pursuits as well. “If everyone has these superhuman tools to create a ton of different stuff, you’re going to get incredible diversity,” and “the amount of creativity that’s going to be unlocked is going to be massive,” he said. “I would guess the world is going to get a lot funnier, weirder, and quirkier, the way that memes on the Internet have gotten over the last 10 years.”

Compare and contrast

To be sure, there are some important differences between the past promise of the metaverse and the current promise of AI technology. Zuckerberg claims that a billion people use Meta’s AI products monthly, for instance, utterly dwarfing the highest estimates for regular use of “the metaverse” or augmented reality as a whole (even if many AI users seem to balk at paying for regular use of AI tools). Meta coders are also reportedly already using AI coding tools regularly in a way they never did with Meta’s metaverse tools. And people are already developing what they consider meaningful relationships with AI personas, whether that’s in the form of therapists or romantic partners.

Still, there are reasons to be skeptical about the future of AI when current models still routinely hallucinate basic facts, show fundamental issues when attempting reasoning, and struggle with basic tasks like beating a children’s video game. The path from where we are to a supposed “superhuman” AI is not simple or inevitable, despite the handwaving of industry boosters like Zuckerberg.

Artist’s conception of Carmack’s VR avatar waving goodbye to Meta.

At the 2021 rollout of Meta’s push to develop a metaverse, high-ranking Meta executives like John Carmack were at least up front about the technical and product-development barriers that could get in the way of Zuckerberg’s vision. “Everybody that wants to work on the metaverse talks about the limitless possibilities of it,” Carmack said at the time (before departing the company in late 2022). “But it’s not limitless. It is a challenge to fit things in, but you can make smarter decisions about exactly what is important and then really optimize the heck out of things.”

Today, those kinds of voices of internal skepticism seem in short supply as Meta sets itself up to push AI in the same way it once backed the metaverse. Don’t be surprised, though, if today’s promise that we’re at “the beginning of a new era for humanity” ages about as well as Meta’s former promises about a metaverse where “you’re gonna be able to do almost anything you can imagine.”


Kyle Orland has been the Senior Gaming Editor at Ars Technica since 2012, writing primarily about the business, tech, and culture behind video games. He has journalism and computer science degrees from the University of Maryland. He once wrote a whole book about Minesweeper.


meta-argues-enshittification-isn’t-real-in-bid-to-toss-ftc-monopoly-case

Meta argues enshittification isn’t real in bid to toss FTC monopoly case

Further, Meta argued that the FTC did not show evidence that users sharing friends-and-family content were shown more ads. Meta noted that it “does not profit by showing more ads to users who do not click on them,” so it only shows more ads to users who click ads.

Meta also insisted that there’s “nothing but speculation” showing that Instagram or WhatsApp would have been better off or grown into rivals had Meta not acquired them.

The company claimed that without Meta’s resources, Instagram may have died off. Meta noted that Instagram co-founder Kevin Systrom testified that his app was “pretty broken and duct-taped” together, making it “vulnerable to spam” before Meta bought it.

Rather than enshittification, what Meta did to Instagram could be considered “a consumer-welfare bonanza,” Meta argued, while dismissing “smoking gun” emails from Mark Zuckerberg discussing buying Instagram to bury it as “legally irrelevant.”

Dismissing these as “a few dated emails,” Meta argued that “efforts to litigate Mr. Zuckerberg’s state of mind before the acquisition in 2012 are pointless.”

“What matters is what Meta did,” Meta argued, which was pump Instagram with resources that allowed it “to ‘thrive’—adding many new features, attracting hundreds of millions and then billions of users, and monetizing with great success.”

In the case of WhatsApp, Meta argued that nobody believes WhatsApp intended to pivot to social media, given that its founders testified their goal was never to add social features, preferring to offer a simple, clean messaging app. And Meta disputed any claim that it feared Google might buy WhatsApp as the basis for creating a Facebook rival, arguing that “the sole Meta witness to (supposedly) learn of Google’s acquisition efforts testified that he did not have that worry.”


at-monopoly-trial,-zuckerberg-redefined-social-media-as-texting-with-friends

At monopoly trial, Zuckerberg redefined social media as texting with friends


“The magic of friends has fallen away”

Mark Zuckerberg played up TikTok rivalry at monopoly trial, but judge may not buy it.

The Meta monopoly trial has raised a question that Meta hopes the Federal Trade Commission (FTC) can’t effectively answer: How important is it to use social media to connect with friends and family today?

Connecting with friends was, of course, Facebook’s primary use case as it became the rare social network to hit 1 billion users—not by being acquired by a Big Tech company but based on the strength of its clean interface and the network effects that kept users locked in simply because all the important people in their life chose to be there.

According to the FTC, Meta took advantage of Facebook’s early popularity, and it has since bought out rivals and otherwise cornered the market on personal social networks. Only Snapchat and MeWe (a privacy-focused Facebook alternative) are competitors to Meta platforms, the FTC argues, and social networks like TikTok or YouTube aren’t interchangeable, because those aren’t destinations focused on connecting friends and family.

For Meta CEO Mark Zuckerberg, however, those early days of Facebook bringing old friends back together are apparently over. He took the stand this week to testify that the FTC’s market definition ignores the reality that Meta contends with today, where “the amount that people are sharing with friends on Facebook, especially, has been declining,” CNN reported.

“Even the amount of new friends that people add … I think has been declining,” Zuckerberg said, although he did not indicate how steep the decline is. “I don’t know the exact numbers,” Zuckerberg admitted. Meta’s former chief operating officer, Sheryl Sandberg, also took the stand and reportedly testified that while she was at Meta, “friends and family sharing went way down over time … If you have a strategy of targeting friends and family, you’d have serious revenue issues.”

In particular, TikTok’s explosive popularity has shifted the dynamics of social media today, Zuckerberg suggested. For many users, “apps now serve primarily as discovery engines,” Zuckerberg testified, and social interactions increasingly come from sharing fun creator content in private messages, rather than through engaging with a friend or family member’s posts.

That’s why Meta added Reels, Zuckerberg testified, and, more recently, TikTok Shop-like functionality. To stay relevant, Meta had to make its platforms more like TikTok, investing heavily in its discovery algorithm and even proving willing to irk loyal Instagram users by turning their perfectly curated square grids into rectangles, Wired noted in a piece probing Meta’s efforts to lure TikTok users to Instagram.

There was seemingly no bridge too far; as Zuckerberg said, “TikTok is still bigger than either Facebook or Instagram, and I don’t like it when our competitors do better than us.” And since Meta has no interest in buying TikTok, due to fears of basing its business in China, Big Tech on Trial reported, Meta’s only choice was to TikTok-ify its apps to avoid a mass exodus after Facebook’s user numbers declined for the first time in 2022. Committing to this future, Meta doubled the amount of force-fed filler in Instagram feeds the next year.

Right now, Meta is positioning TikTok as one of its biggest competitors, with Meta supposedly flagging it as a “top priority” and “highly urgent” competitive threat as early as 2018, Zuckerberg said. Further, Zuckerberg testified that while TikTok’s popularity grew, Meta’s “growth slowed down dramatically,” TechCrunch reported. And perhaps most persuasively, when TikTok briefly went dark earlier this year, some TikTokers moved to Instagram, Meta argued, suggesting that some users consider the platforms interchangeable.

If Meta can convince the court that the FTC’s market definition is wrong and that TikTok is Meta’s biggest rival, then Meta’s market share drops below monopolist standards, “undercutting” the FTC’s case, Big Tech on Trial reported.

But are Facebook and Instagram substitutes for TikTok?

Although Meta paints the picture that TikTok users naturally gravitated to Instagram during the TikTok outage, it’s clear that Meta advertised heavily to move them in that direction. There was even a conspiracy theory that Meta had bought TikTok in the hours before TikTok went down, Wired reported, as users noticed Meta banners encouraging them to link their TikTok accounts to Meta platforms. However, even the reported Meta ad blitz seemingly didn’t sway that many TikTok users, as Sensor Tower data at the time apparently indicated that “Instagram and Facebook appeared to receive only a modest increase in daily active users and downloads” during the TikTok outage, Wired reported.

Perhaps a more interesting question that the court may entertain is not where TikTok users go when TikTok is down, but where Instagram or Facebook users turn if they no longer want to use those platforms. If the FTC can argue that people seeking a destination to connect with friends or family wouldn’t substitute TikTok for that purpose, its market definition might fly.

Kenneth Dintzer, a partner at Crowell & Moring and the former lead attorney in the DOJ’s winning Google search monopoly case, told Ars that the chief judge in the case, James Boasberg, made clear at summary judgment that acknowledging Meta’s rivalry with TikTok “doesn’t really answer the question about friends and family.”

So even though Zuckerberg was “pretty persuasive,” his testimony on TikTok may not move the judge much. However, there was one exchange at the trial where Boasberg asked, “How much does it matter if friends are on a particular platform, if friends can share outside of it?” Zuckerberg praised this as a “good question” and “explained that it doesn’t matter much because people can fluidly share across platforms, using each one for its value as a ‘discovery engine,'” Big Tech on Trial reported.

Dintzer noted that Zuckerberg seemed to attempt to float a different theory explaining why TikTok was a valid rival—curiously attempting to redefine “social media” to overcome the judge’s skepticism in considering TikTok a true Meta rival.

Zuckerberg’s theory, Dintzer said, suggests that “if I open up something on TikTok or on YouTube, and I send it to a friend, that is social media.”

But that broad definition could be problematic, since it would suggest that all texting and messaging are social media, Dintzer said.

“That didn’t seem particularly persuasive,” Dintzer said. Although that kind of social sharing is “certainly something that people enjoy,” it still “doesn’t seem to be quite the same thing as posting something on Facebook for your friends and family.”

Another wrinkle that may scramble Meta’s defense is that Meta has publicly declared that its priority is to bring back “OG Facebook” and refresh how friends and family connect on its platforms. Just today, Instagram chief Adam Mosseri announced a new Instagram feature called “blend” that strives to connect friends and family through sharing access to their unique discovery algorithms.

Those initiatives seem like a strategy that fully relies on Meta’s core use case of connecting friends and family (and network effects that Zuckerberg downplayed) to propel engagement that could spike revenue. However, that goal could invite scrutiny, perhaps signaling to the court that Meta still benefits from the alleged monopoly in personal social networking and will only continue locking in users seeking to connect with friends and family.

“The magic of friends has fallen away,” Meta’s blog said, which, despite seeming at odds, could serve as both a tagline for its new “Friends” tab on Facebook and the headline of its defense so far in the monopoly trial.


Ashley is a senior policy reporter for Ars Technica, dedicated to tracking social impacts of emerging policies and new technologies. She is a Chicago-based journalist with 20 years of experience.


zuckerberg’s-2012-email-dubbed-“smoking-gun”-at-meta-monopoly-trial

Zuckerberg’s 2012 email dubbed “smoking gun” at Meta monopoly trial


FTC’s “entire” monopoly case rests on decade-old emails, Meta argued.

Starting the Federal Trade Commission (FTC) antitrust trial Monday with a bang, Daniel Matheson, the FTC’s lead litigator, flagged a “smoking gun”—a 2012 email where Mark Zuckerberg suggested that Facebook could buy Instagram to “neutralize a potential competitor,” The New York Times reported.

And in “another banger of an email from Zuckerberg,” Brendan Benedict, an antitrust expert monitoring the trial for Big Tech on Trial, posted on X that the Meta CEO wrote, “Messenger isn’t beating WhatsApp. Instagram was growing so much faster than us that we had to buy them for $1 billion… that’s not exactly killing it.”

These messages and others, the FTC hopes to convince the court, provide evidence that Zuckerberg runs Meta by the mantra “it’s better to buy than compete”—seemingly for more than a decade intent on growing the Facebook empire by killing off rivals, allegedly in violation of antitrust law. Another message from Zuckerberg exhibited at trial, Benedict noted on X, suggests Facebook tried to buy yet another rival, Snapchat, for $6 billion.

“We should probably prepare for a leak that we offered $6b… and all the negative [attention] that will come from that,” the Zuckerberg message said.

At the trial, Matheson suggested that “Meta broke the deal” that firms have in the US to compete to succeed, allegedly deciding “that competition was too hard, and it would be easier to buy out their rivals than to compete with them,” the NYT reported. Ultimately, it will be up to the FTC to prove that Meta couldn’t have achieved its dominance today without buying Instagram and WhatsApp (in 2012 and 2014, respectively), while legal experts told the NYT that it is “extremely rare” to unwind mergers approved so many years ago.

Later today, Zuckerberg will take the stand and testify for perhaps seven hours, likely being made to answer for these messages and more. According to the NYT, the FTC will present a paper trail of emails where Zuckerberg and other Meta executives make it clear that acquisitions were intended to remove threats to Facebook’s dominance in the market.

It’s apparent that Meta plans to argue that it doesn’t matter what Zuckerberg or other executives intended when pursuing acquisitions. In a pretrial brief, Meta argued that “the FTC’s case rests almost entirely on emails (many more than a decade old) allegedly expressing competitive concerns” but suggested that this is only “intent” evidence, “without any evidence of anticompetitive effects.”

FTC may force Meta to spin off Instagram, WhatsApp

It is the FTC’s burden to show that Meta’s acquisitions harmed consumers and the market (and those harms outweigh any believable pro-competitive benefits alleged by Meta), but it remains to be seen whether Meta will devote ample time to testifying that “Mark Zuckerberg got it wrong” when describing his rationale for acquisitions, Big Tech on Trial noted.

Meta’s lead lawyer, Mark Hansen, told Law360 that “what people thought at Meta is not really what this case is.” (For those keeping track of who’s who in this case, Hansen apparently once was the boss of James Boasberg, the judge in the case, Big Tech on Trial reported.)

The social media company hopes to convince the court that the FTC’s case is political. So far, Meta has accused the FTC of shifting its market definition while willfully overlooking today’s competitive realities online, simply to punish a tech giant for its success.

In a blog post on Sunday, Meta’s chief legal officer, Jennifer Newstead, accused the FTC of lobbing a “weak case” that “ignores reality.” Meta insists that the FTC has “gerrymandered a fictitious market” to exclude Meta’s actual rivals, like TikTok, X, YouTube, or LinkedIn.

Boasberg will be scrutinizing the market definition, as well as alleged harms, and the FTC will potentially struggle to win him over on the merits of their case. Big Tech on Trial—which suggested that Meta’s acquisitions, if intended to kill off rivals, would be considered “a textbook violation of the antitrust laws”—noted that the court previously told the FTC that the agency had an “uphill climb” in proving its market definition. And because Meta’s social platforms are free, it’s harder to show direct evidence of consumer harms, experts have noted.

Still, for Meta, the stakes are high, as the FTC could pursue a breakup of the company, including requiring Meta to spin off WhatsApp and Instagram. Losing Instagram would hit Meta’s revenue hard, as Instagram is expected to bring in more than half of Meta’s US ad revenue in 2025, eMarketer forecasted last December.

The trial is expected to last eight weeks, but much of the most-anticipated testimony will come early. Facebook’s former chief operating officer, Sheryl Sandberg, as well as Kevin Systrom, co-founder of Instagram, are expected to testify this week.

All unsealed emails and exhibits will eventually be posted on a website jointly managed by the FTC and Meta, but Ars was not yet provided a link or timeline for when the public evidence will be posted online.

Meta mocks FTC’s “ad load theory”

The FTC is arguing that Meta overpaid to acquire Instagram and WhatsApp to maintain an alleged monopoly in the personal social networking market that includes rivals like Snapchat and MeWe, a social networking platform that brands itself as a privacy-focused Facebook alternative.

In opening arguments, the FTC alleged that once competition was eliminated, Meta then degraded the quality of its platforms by limiting user privacy and inundating users with ads.

Meta has defended its acquisitions by arguing that it has improved Instagram and WhatsApp. At trial, Meta’s lawyer Hansen made light of the FTC’s “ad load theory,” stirring laughter in the reportedly packed courtroom, Benedict posted on X.

“If you don’t like an ad, you scroll past it. It takes about a second,” Hansen said.

Meanwhile, Newstead, who reportedly attended opening arguments, argued in her blog that “Instagram and WhatsApp provide a model for what successful acquisitions can achieve: Meta has made Instagram and WhatsApp better, more reliable and more secure through billions of dollars and millions of hours of investment.”

By breaking up these acquisitions, Hansen argued, the FTC would be sending a strong message to startups that “would kill entrepreneurship” by seemingly taking mergers and acquisitions “off the table,” Benedict posted on X.

To defeat the FTC, Meta will likely attempt to broaden the market definition to include more rivals. In support of that, Meta has already pointed to the recent TikTok ban driving TikTok users to Instagram, which allegedly shows the platforms are interchangeable, despite the FTC differentiating TikTok as a video app.

The FTC will likely lean on Meta’s internal documents to show who Meta actually considers rivals. During opening arguments, for example, the FTC reportedly shared a Meta document showing that Meta itself has agreed with the FTC and differentiated Facebook as connecting “friends and family,” while “LinkedIn connects coworkers” and “Nextdoor connects neighbors.”

“Contemporaneous records reveal that Meta and other social media executives understood that users flock to different platforms for different purposes and that Facebook, Instagram, and WhatsApp were specifically designed to operate in a distinct submarket for family and friend connections,” the American Economic Liberties Project, which is partnering with Big Tech on Trial to monitor the proceedings, said in a press statement.

But Newstead suggested that “evidence of fierce and increasing competition in the market has only grown in the four years since the FTC’s complaint was filed,” and Meta now “faces strong competition in a rapidly shifting tech landscape that includes American and foreign competitors.”

To emphasize the threats to US consumers and businesses, Newstead also invoked the supposed threat to America’s AI leadership if one of the country’s leading tech companies loses momentum at this key moment.

“It’s absurd that the FTC is trying to break up a great American company at the same time the Administration is trying to save Chinese-owned TikTok,” Newstead said. “And, it makes no sense for regulators to try and weaken US companies right at the moment we most need them to invest in winning the competition with China for leadership in AI.”

Trump’s FTC appears unlikely to back down

Zuckerberg has been criticized for his supposed last-ditch attempts to push the Trump administration to pause or toss the FTC’s case. Last month, the CEO visited Trump in the Oval Office to discuss a settlement, Politico reported, apparently worrying officials who don’t want Trump to bail out Meta.

On Monday, the FTC did not appear to be wavering, however, prompting alarm bells in the tech industry.

Patrick Hedger, the director of policy for NetChoice—a trade group that represents Meta and other Big Tech companies—warned that if the FTC undoes Meta’s acquisitions, it would harm innovation and competition while damaging trust in the FTC long-term.

“This bait-and-switch against Meta for acquisitions approved over 10 years ago in the fiercely competitive social media marketplace will have serious ripple effects not only for the US tech industry, but across all American businesses,” Hedger said.

Seemingly accusing Donald Trump’s FTC of pursuing Lina Khan’s alleged agenda against Big Tech, Hedger added that “with Meta at the forefront of open-source AI innovation and a global competitor, the outcome of this trial will have spillover into the entire economy. It will create a fear among businesses that making future, pro-competitive investments could be reversed due to political discontent—not the necessary evidence traditionally required for an anticompetitive claim.”

Big Tech on Trial noted that it’s possible that the FTC could “vote to settle, withdraw, or pause the case.” Last month, Trump fired the agency’s two Democratic commissioners, eliminating the 3–2 split and ensuring that only Republicans are steering the agency for now.

But Trump’s FTC seems determined to proceed in attempts to disrupt Meta’s business. FTC Chair Andrew Ferguson told Fox Business Monday that “antitrust laws can help make sure that no private sector company gets so powerful that it affects our lives in ways that are really bad for all Americans,” and “that’s what this trial beginning today is all about.”


Ashley is a senior policy reporter for Ars Technica, dedicated to tracking social impacts of emerging policies and new technologies. She is a Chicago-based journalist with 20 years of experience.
