
Meta risks sanctions over “sneaky” ad-free plans confusing users, EU says

Under pressure —

Consumer laws may change Meta’s ad-free plans before EU’s digital crackdown does.

The European Commission (EC) has finally taken action to block Meta’s heavily criticized plan to charge a subscription fee to users who value privacy on its platforms.

Surprisingly, this step wasn’t taken under laws like the Digital Services Act (DSA), the Digital Markets Act (DMA), or the General Data Protection Regulation (GDPR).

Instead, the EC announced Monday that Meta risked sanctions under EU consumer laws if it could not resolve key concerns about Meta’s so-called “pay or consent” model.

Meta’s model is seemingly problematic, the commission said, because Meta “requested consumers overnight to either subscribe to use Facebook and Instagram against a fee or to consent to Meta’s use of their personal data to be shown personalized ads, allowing Meta to make revenue out of it.”

Because users were given such short notice, they may have been “exposed to undue pressure to choose rapidly between the two models, fearing that they would instantly lose access to their accounts and their network of contacts,” the EC said.

To protect consumers, the EC joined national consumer protection authorities, sending a letter to Meta requiring the tech giant to propose solutions to resolve the commission’s biggest concerns by September 1.

That Meta’s “pay or consent” model may be “misleading” is a top concern because it uses the term “free” for ad-based plans, even though Meta “can make revenue from using their personal data to show them personalized ads.” It seems that while Meta does not consider giving away personal information to be a cost to users, the EC’s commissioner for justice, Didier Reynders, apparently does.

“Consumers must not be lured into believing that they would either pay and not be shown any ads anymore, or receive a service for free, when, instead, they would agree that the company used their personal data to make revenue with ads,” Reynders said. “EU consumer protection law is clear in this respect. Traders must inform consumers upfront and in a fully transparent manner on how they use their personal data. This is a fundamental right that we will protect.”

Additionally, the EC is concerned that Meta users might be confused about how “to navigate through different screens in the Facebook/Instagram app or web-version and to click on hyperlinks directing them to different parts of the Terms of Service or Privacy Policy to find out how their preferences, personal data, and user-generated data will be used by Meta to show them personalized ads.” They may also find Meta’s “imprecise terms and language” confusing, such as Meta referring to “your info” instead of clearly referring to consumers’ “personal data.”

To resolve the EC’s concerns, Meta may have to give EU users more time to decide if they want to pay to subscribe or consent to personal data collection for targeted ads. Or Meta may have to take more drastic steps by altering language and screens used when securing consent to collect data or potentially even scrapping its “pay or consent” model entirely, as pressure in the EU mounts.

So far, Meta has defended its model against claims that it violates the DMA, the DSA, and the GDPR, and Meta’s spokesperson told Ars that Meta continues to defend the model while facing down the EC’s latest action.

“Subscriptions as an alternative to advertising are a well-established business model across many industries,” Meta’s spokesperson told Ars. “Subscription for no ads follows the direction of the highest court in Europe and we are confident it complies with European regulation.”

Meta’s model is “sneaky,” EC said

Since last year, the social media company has argued that its “subscription for no ads” model was “endorsed” by the highest court in Europe, the Court of Justice of the European Union (CJEU).

However, privacy advocates have noted that this alleged endorsement came following a CJEU case under the GDPR and was only presented as a hypothetical, rather than a formal part of the ruling, as Meta seems to interpret.

What the CJEU said was that “users must be free to refuse individually”—”in the context of” signing up for services—”to give their consent to particular data processing operations not necessary” for Meta to provide such services “without being obliged to refrain entirely from using the service.” That “means that those users are to be offered, if necessary for an appropriate fee, an equivalent alternative not accompanied by such data processing operations,” the CJEU said.

The nuance here may matter when it comes to Meta’s proposed solutions even if the EC accepts the CJEU’s suggestion of an acceptable alternative as setting some sort of legal precedent. Because the consumer protection authorities raised the action due to Meta suddenly changing the consent model for existing users—not “in the context of” signing up for services—Meta may struggle to persuade the EC that existing users weren’t misled and pressured into paying for a subscription or consenting to ads, given how fast Meta’s policy shifted.

Meta risks sanctions if a compromise can’t be reached, the EC said. Under the EU’s Unfair Contract Terms Directive, for example, Meta could be fined up to 4 percent of its annual turnover if consumer protection authorities are unsatisfied with Meta’s proposed solutions.

The EC’s vice president for values and transparency, Věra Jourová, provided a statement in the press release, calling Meta’s abrupt introduction of the “pay or consent” model “sneaky.”

“We are proud of our strong consumer protection laws which empower Europeans to have the right to be accurately informed about changes such as the one proposed by Meta,” Jourová said. “In the EU, consumers are able to make truly informed choices and we now take action to safeguard this right.”


Meta tells court it won’t sue over Facebook feed-killing tool—yet

This week, Meta asked a US district court in California to toss a lawsuit filed by a professor, Ethan Zuckerman, who fears that Meta will sue him if he releases a tool that would give Facebook users an automated way to easily remove all content from their feeds.

Zuckerman has alleged that the imminent threat of a lawsuit from Meta has prevented him from releasing Unfollow Everything 2.0, suggesting that a cease-and-desist letter sent to the creator of the original Unfollow Everything substantiates his fears.

He’s hoping the court will find that either releasing his tool would not breach Facebook’s terms of use—which prevent “accessing or collecting data from Facebook ‘using automated means'”—or that those terms conflict with public policy. Among laws that Facebook’s terms allegedly conflict with are the First Amendment, Section 230 of the Communications Decency Act, the Computer Fraud and Abuse Act (CFAA), as well as California’s Computer Data Access and Fraud Act (CDAFA) and state privacy laws.

But Meta claimed in its motion to dismiss that Zuckerman’s suit is premature, mostly because the tool has not yet been built and Meta has not had a chance to review the “non-existent tool” to determine how Unfollow Everything 2.0 might impact its platform or its users.

“Besides bald assertions about how Plaintiff intends Unfollow Everything 2.0 to work and what he plans to do with it, there are no concrete facts that would enable this Court to adjudicate potential legal claims regarding this tool—which, at present, does not even operate in the real world,” Meta argued.

Meta wants all of Zuckerman’s claims to be dismissed, arguing that “adjudicating Plaintiff’s claims would require needless rulings on hypothetical applications of California law, would likely result in duplicative litigation, and would encourage forum shopping.”

At the heart of Meta’s defense is a claim that there’s no telling yet if Zuckerman will ever be able to release the tool, although Zuckerman said he was prepared to finish the build within six weeks of a court win. Last May, Zuckerman told Ars that it makes more sense to finish building the tool after the lawsuit is settled, since Facebook’s design and functionality are always changing.

Meta claimed that Zuckerman can’t confirm if Unfollow Everything 2.0 would work as described in his suit precisely because his findings are based on Facebook’s current interface, and the “process for unfollowing has changed over time and will likely continue to change.”

Further, Meta argued that the original Unfollow Everything performed in a different way—by logging in on behalf of users and automatically unfollowing everything, rather than performing the automated unfollowing when the users themselves log in. Because of that, Meta argued that the new tool may not prompt the same response from Meta.

A senior staff attorney at the Knight Institute who helped draft Zuckerman’s complaint, Ramya Krishnan, told Ars that the two tools operate nearly identically, however.

“Professor Zuckerman’s tool and the original Unfollow Everything work in essentially the same way,” Krishnan told Ars. “They automatically unfollow all of a user’s friends, groups, and pages after the user installs the tool and logs in to Facebook using their web browser.”

Ultimately, Meta claimed that there’s no telling if Meta would even sue over the tool’s automated access to user data, dismissing Zuckerman’s fears as unsubstantiated.

Only when the tool is out in the wild and Facebook is able to determine “actual, concrete facts about how it works in practice” that “may prove problematic” will Meta know if a legal response is needed, Meta claimed. Without reviewing the technical specs, Meta argued, Meta has no way to assess the damages or know if it would sue over a breach of contract, as alleged, or perhaps over other claims not alleged, such as trademark infringement.


SCOTUS nixes injunction that limited Biden admin contacts with social networks

On Wednesday, the Supreme Court tossed out claims that the Biden administration coerced social media platforms into censoring users by removing COVID-19 and election-related content.

Complaints alleging that high-ranking government officials were censoring conservatives had previously convinced a lower court to order an injunction limiting the Biden administration’s contacts with platforms. But now that injunction has been overturned, re-opening lines of communication just ahead of the 2024 elections—when officials will once again be closely monitoring the spread of misinformation online targeted at voters.

In a 6–3 vote, the majority ruled that none of the plaintiffs suing—including five social media users and Republican attorneys general in Louisiana and Missouri—had standing. They had alleged that the government had “pressured the platforms to censor their speech in violation of the First Amendment,” demanding an injunction to stop any future censorship.

Plaintiffs may have succeeded if they were instead seeking damages for past harms. But in her opinion, Justice Amy Coney Barrett wrote that partly because the Biden administration seemingly stopped influencing platforms’ content policies in 2022, none of the plaintiffs could show evidence of a “substantial risk that, in the near future, they will suffer an injury that is traceable” to any government official. Thus, they did not seem to face “a real and immediate threat of repeated injury,” Barrett wrote.

“Without proof of an ongoing pressure campaign, it is entirely speculative that the platforms’ future moderation decisions will be attributable, even in part,” to government officials, Barrett wrote, finding that an injunction would do little to prevent future censorship.

Instead, plaintiffs’ claims “depend on the platforms’ actions,” Barrett emphasized, “yet the plaintiffs do not seek to enjoin the platforms from restricting any posts or accounts.”

“It is a bedrock principle that a federal court cannot redress ‘injury that results from the independent action of some third party not before the court,'” Barrett wrote.

Barrett repeatedly noted “weak” arguments raised by plaintiffs, none of which could directly link their specific content removals with the Biden administration’s pressure campaign urging platforms to remove vaccine or election misinformation.

According to Barrett, the lower court initially granting the injunction “glossed over complexities in the evidence,” including the fact that “platforms began to suppress the plaintiffs’ COVID-19 content” before the government pressure campaign began. That’s an issue, Barrett said, because standing to sue “requires a threshold showing that a particular defendant pressured a particular platform to censor a particular topic before that platform suppressed a particular plaintiff’s speech on that topic.”

“While the record reflects that the Government defendants played a role in at least some of the platforms’ moderation choices, the evidence indicates that the platforms had independent incentives to moderate content and often exercised their own judgment,” Barrett wrote.

Barrett was similarly unconvinced by arguments that plaintiffs risk platforms removing future content based on stricter moderation policies that were previously coerced by officials.

“Without evidence of continued pressure from the defendants, the platforms remain free to enforce, or not to enforce, their policies—even those tainted by initial governmental coercion,” Barrett wrote.

Justice: SCOTUS “shirks duty” to defend free speech

Justices Clarence Thomas and Neil Gorsuch joined Samuel Alito in dissenting, arguing that “this is one of the most important free speech cases to reach this Court in years” and that the Supreme Court had an “obligation” to “tackle the free speech issue that the case presents.”

“The Court, however, shirks that duty and thus permits the successful campaign of coercion in this case to stand as an attractive model for future officials who want to control what the people say, hear, and think,” Alito wrote.

Alito argued that the evidence showed that while “downright dangerous” speech was suppressed, so was “valuable speech.” He agreed with the lower court that “a far-reaching and widespread censorship campaign” had been “conducted by high-ranking federal officials against Americans who expressed certain disfavored views about COVID-19 on social media.”

“For months, high-ranking Government officials placed unrelenting pressure on Facebook to suppress Americans’ free speech,” Alito wrote. “Because the Court unjustifiably refuses to address this serious threat to the First Amendment, I respectfully dissent.”

At least one plaintiff who opposed masking and vaccines, Jill Hines, was “indisputably injured,” Alito wrote, arguing that evidence showed that she was censored more frequently after officials pressured Facebook into changing its policies.

“Top federal officials continuously and persistently hectored Facebook to crack down on what the officials saw as unhelpful social media posts, including not only posts that they thought were false or misleading but also stories that they did not claim to be literally false but nevertheless wanted obscured,” Alito wrote.

While Barrett and the majority found that platforms were more likely responsible for injury, Alito disagreed, writing that with the threat of antitrust probes or Section 230 amendments, Facebook acted like “a subservient entity determined to stay in the good graces of a powerful taskmaster.”

Alito wrote that the majority was “applying a new and heightened standard” by requiring plaintiffs to “untangle Government-caused censorship from censorship that Facebook might have undertaken anyway.” In his view, it was enough that Hines showed that “one predictable effect of the officials’ action was that Facebook would modify its censorship policies in a way that affected her.”

“When the White House pressured Facebook to amend some of the policies related to speech in which Hines engaged, those amendments necessarily impacted some of Facebook’s censorship decisions,” Alito wrote. “Nothing more is needed. What the Court seems to want are a series of ironclad links.”

“That is regrettable,” Alito said.


Surgeon general’s proposed social media warning label for kids could hurt kids

US Surgeon General Vivek Murthy wants to put a warning label on social media platforms, alerting young users of potential mental health harms.

“It is time to require a surgeon general’s warning label on social media platforms stating that social media is associated with significant mental health harms for adolescents,” Murthy wrote in a New York Times op-ed published Monday.

Murthy argued that a warning label is urgently needed because the “mental health crisis among young people is an emergency,” and adolescents overusing social media can increase risks of anxiety and depression and negatively impact body image.

Spiking mental health issues for young people began long before the surgeon general declared a youth behavioral health crisis during the pandemic, an April report from a New York nonprofit called the United Health Fund found. Between 2010 and 2022, “adolescents ages 12–17 have experienced the highest year-over-year increase in having a major depressive episode,” the report said. By 2022, 6.7 million adolescents in the US were reporting “suffering from one or more behavioral health condition.”

However, mental health experts have maintained that the science is divided, showing that kids can also benefit from social media depending on how they use it. Murthy’s warning label seems to ignore that tension, prioritizing raising awareness of potential harms even though parents potentially restricting online access due to the proposed label could end up harming some kids. The label also would seemingly fail to acknowledge known risks to young adults, whose brains continue developing after the age of 18.

To create the proposed warning label, Murthy is seeking better data from social media companies that have not always been transparent about studying or publicizing alleged harms to kids on their platforms. Last year, a Meta whistleblower, Arturo Bejar, testified to a US Senate subcommittee that Meta overlooks obvious reforms and “continues to publicly misrepresent the level and frequency of harm that users, especially children, experience” on its platforms Facebook and Instagram.

According to Murthy, the US is past the point of accepting promises from social media companies to make their platforms safer. “We need proof,” Murthy wrote.

“Companies must be required to share all of their data on health effects with independent scientists and the public—currently they do not—and allow independent safety audits,” Murthy wrote, arguing that parents need “assurance that trusted experts have investigated and ensured that these platforms are safe for our kids.”

“A surgeon general’s warning label, which requires congressional action, would regularly remind parents and adolescents that social media has not been proved safe,” Murthy wrote.

Kids need safer platforms, not a warning label

Leaving parents to police kids’ use of platforms is unacceptable, Murthy said, because their efforts are “pitted against some of the best product engineers and most well-resourced companies in the world.”

That is nearly an impossible battle for parents, Murthy argued. If platforms are allowed to ignore harms to kids while pursuing financial gains by developing features that are laser-focused on maximizing young users’ online engagement, platforms will “likely” perpetuate the cycle of problematic use that Murthy described in his op-ed, the American Psychological Association (APA) warned this year.

Downplayed in Murthy’s op-ed, however, is the fact that social media use is not universally harmful to kids and can be beneficial to some, especially children in marginalized groups. Monitoring this tension remains a focal point of the APA’s most recent guidance, which noted in April 2024 that “society continues to wrestle with ways to maximize the benefits of these platforms while protecting youth from the potential harms associated with them.”

“Psychological science continues to reveal benefits from social media use, as well as risks and opportunities that certain content, features, and functions present to young social media users,” APA reported.

According to the APA, platforms urgently need to enact responsible safety standards that diminish risks without restricting kids’ access to beneficial social media use.

“By early 2024, few meaningful changes to social media platforms had been enacted by industry, and no federal policies had been adopted,” the APA report said. “There remains a need for social media companies to make fundamental changes to their platforms.”

The APA has recommended a range of platform reforms, including limiting infinite scroll, imposing time limits on young users, reducing kids’ push notifications, and adding protections to shield kids from malicious actors.

Bejar agreed with the APA that platforms owe it to parents to make meaningful reforms. His ideal future would see platforms gathering more granular feedback from young users to expose harms and confront them faster. He provided senators with recommendations that platforms could use to “radically improve the experience of our children on social media” without “eliminating the joy and value they otherwise get from using such services” and without “significantly” affecting profits.

Bejar’s reforms included platforms providing young users with open-ended ways to report harassment, abuse, and harmful content that allow users to explain exactly why a contact or content was unwanted—rather than platforms limiting feedback to certain categories they want to track. This could help ensure that companies that strategically limit language in reporting categories don’t obscure the harms and also provide platforms with more information to improve services, Bejar suggested.

By improving feedback mechanisms, Bejar said, platforms could more easily adjust kids’ feeds to stop recommending unwanted content. The APA’s report agreed that this was an obvious area for platform improvement, finding that “the absence of clear and transparent processes for addressing reports of harmful content makes it harder for youth to feel protected or able to get help in the face of harmful content.”

Ultimately, the APA, Bejar, and Murthy all seem to agree that it is important to bring in outside experts to help platforms come up with better solutions, especially as technology advances. The APA warned that “AI-recommended content has the potential to be especially influential and hard to resist” for some of the youngest users online (ages 10–13).


Meta halts plans to train AI on Facebook, Instagram posts in EU

Not so fast —

Meta was going to start training AI on Facebook and Instagram posts on June 26.

Meta has apparently paused plans to process mounds of user data to bring new AI experiences to Europe.

The decision comes after data regulators rebuffed the tech giant’s claims that it had “legitimate interests” in processing European Union- and European Economic Area (EEA)-based Facebook and Instagram users’ data—including personal posts and pictures—to train future AI tools.

There’s not much information available yet on Meta’s decision. But Meta’s EU regulator, the Irish Data Protection Commission (DPC), posted a statement confirming that Meta made the move after ongoing discussions with the DPC about compliance with the EU’s strict data privacy laws, including the General Data Protection Regulation (GDPR).

“The DPC welcomes the decision by Meta to pause its plans to train its large language model using public content shared by adults on Facebook and Instagram across the EU/EEA,” the DPC said. “This decision followed intensive engagement between the DPC and Meta. The DPC, in co-operation with its fellow EU data protection authorities, will continue to engage with Meta on this issue.”

The European Center for Digital Rights, known as Noyb, had filed 11 complaints across the EU and intended to file more to stop Meta from moving forward with its AI plans. The DPC initially gave Meta AI the green light to proceed but has now made a U-turn, Noyb said.

Meta’s policy still requires an update

In a blog, Meta had previously teased new AI features coming to the EU, including everything from customized stickers for chats and stories to Meta AI, a “virtual assistant you can access to answer questions, generate images, and more.” Meta had argued that training on EU users’ personal data was necessary so that AI services could reflect “the diverse cultures and languages of the European communities who will use them.”

Before the pause, the company had been hoping to rely “on the legal basis of ‘legitimate interests’” to process the data, because it’s needed “to improve AI at Meta.” But Noyb and EU data regulators had argued that Meta’s legal basis did not comply with the GDPR, with the Norwegian Data Protection Authority arguing that “the most natural thing would have been to ask the users for their consent before their posts and images are used in this way.”

Rather than ask for consent, however, Meta had given EU users until June 26 to opt out. Noyb had alleged that in going this route, Meta planned to use “dark patterns” to thwart AI opt-outs in the EU and collect as much data as possible to fuel undisclosed AI technologies. Noyb urgently argued that once users’ data is in the system, “users seem to have no option of ever having it removed.”

Noyb said that the “obvious explanation” for Meta seemingly halting its plans was pushback from EU officials, but the privacy advocacy group also warned EU users that Meta’s privacy policy has not yet been fully updated to reflect the pause.

“We welcome this development but will monitor this closely,” Max Schrems, Noyb chair, said in a statement provided to Ars. “So far there is no official change of the Meta privacy policy, which would make this commitment legally binding. The cases we filed are ongoing and will need a determination.”

Ars was not immediately able to reach Meta for comment.


Robert F. Kennedy Jr. sues Meta, citing chatbot’s reply as evidence of shadowban

Screenshot from the documentary Who Is Bobby Kennedy?

In a lawsuit that seems determined to ignore that Section 230 exists, Robert F. Kennedy Jr. has sued Meta for allegedly shadowbanning his million-dollar documentary, Who Is Bobby Kennedy?, and preventing his supporters from advocating for his presidential campaign.

According to Kennedy, Meta is colluding with the Biden administration to sway the 2024 presidential election by suppressing Kennedy’s documentary and making it harder to support Kennedy’s candidacy. This allegedly has caused “substantial donation losses,” while also violating the free speech rights of Kennedy, his supporters, and his film’s production company, AV24.

Meta had initially restricted the documentary on Facebook and Instagram but later fixed the issue after discovering that the film was mistakenly flagged by the platforms’ automated spam filters.

But Kennedy’s complaint claimed that Meta is still “brazenly censoring speech” by “continuing to throttle, de-boost, demote, and shadowban the film.” In an exhibit, Kennedy’s lawyers attached screenshots representing “hundreds” of Facebook and Instagram users whom Meta allegedly sent threats, intimidated, and sanctioned after they shared the documentary.

Some of these users remain suspended on Meta platforms, the complaint alleged. Others whose temporary suspensions have been lifted claimed that their posts are still being throttled, though, and Kennedy’s lawyers earnestly insisted that an exchange with Meta’s chatbot proves it.

Two days after the documentary’s release, Kennedy’s team apparently asked the Meta AI assistant, “When users post the link whoisbobbykennedy.com, can their followers see the post in their feeds?”

“I can tell you that the link is currently restricted by Meta,” the chatbot answered.

Chatbots, of course, are notoriously inaccurate sources of information, and Meta AI’s terms of service note this. In a section labeled “accuracy,” Meta warns that chatbot responses “may not reflect accurate, complete, or current information” and should always be verified.

Perhaps more significantly, there is little reason to think that Meta’s chatbot would have access to information about internal content moderation decisions.

Techdirt’s Mike Masnick mocked Kennedy’s reliance on the chatbot in the case. He noted that Kennedy seemed to have no evidence of the alleged shadow-banning, while there’s plenty of evidence that Meta’s spam filters accidentally remove non-violative content all the time.

Meta’s chatbot is “just a probabilistic stochastic parrot, repeating a probable sounding answer to users’ questions,” Masnick wrote. “And these idiots think it’s meaningful evidence. This is beyond embarrassing.”

Neither Meta nor Kennedy’s lawyer, Jed Rubenfeld, responded to Ars’ request to comment.


Concerns over addicted kids spur probe into Meta and its use of dark patterns

Protecting the vulnerable —

EU is concerned Meta isn’t doing enough to protect children using its apps.

An iPhone screen displaying the app icons for WhatsApp, Messenger, Instagram, and Facebook. Getty Images | Chesnot

Brussels has opened an in-depth probe into Meta over concerns it is failing to do enough to protect children from becoming addicted to social media platforms such as Instagram.

The European Commission, the EU’s executive arm, announced on Thursday it would look into whether the Silicon Valley giant’s apps were reinforcing “rabbit hole” effects, where users get drawn ever deeper into online feeds and topics.

EU investigators will also look into whether Meta, which owns Facebook and Instagram, is complying with legal obligations to provide appropriate age-verification tools to prevent children from accessing inappropriate content.

The probe is the second into the company under the EU’s Digital Services Act. The landmark legislation is designed to police content online, with sweeping new rules on the protection of minors.

It also has mechanisms to force Internet platforms to reveal how they are tackling misinformation and propaganda.

The DSA, which was approved last year, imposes new obligations on very large online platforms with more than 45 million users in the EU. If Meta is found to have broken the law, Brussels can impose fines of up to 6 percent of a company’s global annual turnover.

Repeat offenders can even face bans in the single market as an extreme measure to enforce the rules.

Thierry Breton, commissioner for internal market, said the EU was “not convinced” that Meta “has done enough to comply with the DSA obligations to mitigate the risks of negative effects to the physical and mental health of young Europeans on its platforms Facebook and Instagram.”

“We are sparing no effort to protect our children,” Breton added.

Meta said: “We want young people to have safe, age-appropriate experiences online and have spent a decade developing more than 50 tools and policies designed to protect them. This is a challenge the whole industry is facing, and we look forward to sharing details of our work with the European Commission.”

In the investigation, the commission said it would focus on whether Meta’s platforms were putting in place “appropriate and proportionate measures to ensure a high level of privacy, safety, and security for minors.” It added that it was placing special emphasis on default privacy settings for children.

Last month, the EU opened the first probe into Meta under the DSA over worries the social media giant is not properly curbing disinformation from Russia and other countries.

Brussels is especially concerned about whether the social media company’s platforms are properly moderating content from Russian sources that may try to destabilize upcoming elections across Europe.

Meta defended its moderating practices and said it had appropriate systems in place to stop the spread of disinformation on its platforms.

© 2024 The Financial Times Ltd. All rights reserved. Not to be redistributed, copied, or modified in any way.


Professor sues Meta to allow release of feed-killing tool for Facebook

themotioncloud/Getty Images

Ethan Zuckerman wants to release a tool that would allow Facebook users to control what appears in their newsfeeds. His privacy-friendly browser extension, Unfollow Everything 2.0, is designed to essentially give users a switch to turn the newsfeed on and off whenever they want, providing a way to eliminate or curate the feed.

Ethan Zuckerman, a professor at University of Massachusetts Amherst, is suing Meta to release a tool allowing Facebook users to “unfollow everything.” (Photo by Lorrie LeJeune)

The tool is nearly ready to be released, Zuckerman told Ars, but the University of Massachusetts Amherst associate professor is afraid that Facebook owner Meta might threaten legal action if he goes ahead. And his fears appear well-founded. In 2021, Meta sent a cease-and-desist letter to the creator of the original Unfollow Everything, Louis Barclay, leading that developer to shut down his tool after thousands of Facebook users had eagerly downloaded it.

Zuckerman is suing Meta, asking a US district court in California to invalidate Meta’s past arguments against developers like Barclay and rule that Meta would have no grounds to sue if he released his tool.

Zuckerman insists that he’s “suing Facebook to make it better.” In picking this unusual legal fight with Meta, the professor—seemingly for the first time ever—is attempting to tip Section 230’s shield away from Big Tech and instead protect third-party developers from giant social media platforms.

To do this, Zuckerman is asking the court to consider a novel Section 230 argument relating to an overlooked provision of the law that Zuckerman believes protects the development of third-party tools that allow users to curate their newsfeeds to avoid objectionable content. His complaint cited case law and argued:

Section 230(c)(2)(B) immunizes from legal liability “a provider of software or enabling tools that filter, screen, allow, or disallow content that the provider or user considers obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable.” Through this provision, Congress intended to promote the development of filtering tools that enable users to curate their online experiences and avoid content they would rather not see.

Unfollow Everything 2.0 falls within this “safe harbor,” Zuckerman argues, partly because “the purpose of the tool is to allow users who find the newsfeed objectionable, or who find the specific sequencing of posts within their newsfeed objectionable, to effectively turn off the feed.”

Ramya Krishnan, a senior staff attorney at the Knight Institute who helped draft Zuckerman’s complaint, told Ars that some Facebook users are concerned that the newsfeed “prioritizes inflammatory and sensational speech,” and they “may not want to see that kind of content.” By turning off the feed, Facebook users could choose to use the platform the way it was originally designed, avoiding being served objectionable content by blanking the newsfeed and manually navigating to only the content they want to see.

“Users don’t have to accept Facebook as it’s given to them,” Krishnan said in a press release provided to Ars. “The same statute that immunizes Meta from liability for the speech of its users gives users the right to decide what they see on the platform.”

Zuckerman, who considers himself “old to the Internet,” uses Facebook daily and even reconnected with and began dating his now-wife on the platform. He has a “soft spot” in his heart for Facebook and still finds the platform useful to keep in touch with friends and family.

But while he’s “never been in the ‘burn it all down’ camp,” he has watched social media evolve to give users less control over their feeds and believes “that the dominance of a small number of social media companies tends to create the illusion that the business model adopted by them is inevitable,” his complaint said.


Meta relaxes “incoherent” policy requiring removal of AI videos

On Friday, Meta announced policy updates to stop censoring harmless AI-generated content and instead begin “labeling a wider range of video, audio, and image content as ‘Made with AI.'”

Meta’s policy updates came after deciding not to remove a controversial post edited to show President Joe Biden seemingly inappropriately touching his granddaughter’s chest, with a caption calling Biden a “pedophile.” The Oversight Board had agreed with Meta’s decision to leave the post online while noting that Meta’s current manipulated media policy was too “narrow,” “incoherent,” and “confusing to users.”

Previously, Meta would only remove “videos that are created or altered by AI to make a person appear to say something they didn’t say.” The Oversight Board warned that this policy failed to address other manipulated media, including “cheap fakes,” manipulated audio, or content showing people doing things they’d never done.

“We agree with the Oversight Board’s argument that our existing approach is too narrow since it only covers videos that are created or altered by AI to make a person appear to say something they didn’t say,” Monika Bickert, Meta’s vice president of content policy, wrote in a blog. “As the Board noted, it’s equally important to address manipulation that shows a person doing something they didn’t do.”

Starting in May 2024, Meta will add “Made with AI” labels to any content detected as AI-generated, as well as to any content that users self-disclose as AI-generated.

Meta’s Oversight Board had also warned that Meta’s removal of AI-generated videos that did not directly violate its community standards threatened to “unnecessarily risk restricting freedom of expression.” Moving forward, Meta will stop censoring content that doesn’t violate community standards, agreeing that a “less restrictive” approach of labeling manipulated media is better.

“If we determine that digitally created or altered images, video, or audio create a particularly high risk of materially deceiving the public on a matter of importance, we may add a more prominent label so people have more information and context,” Bickert wrote. “This overall approach gives people more information about the content so they can better assess it and so they will have context if they see the same content elsewhere.”

Meta confirmed that, in July, it will stop censoring AI-generated content that doesn’t violate rules restricting things like voter interference, bullying and harassment, violence, and incitement.

“This timeline gives people time to understand the self-disclosure process before we stop removing the smaller subset of manipulated media,” Bickert explained in the blog.

Finally, Meta adopted the Oversight Board’s recommendation to “clearly define in a single unified Manipulated Media policy the harms it aims to prevent—beyond users being misled—such as preventing interference with the right to vote and to participate in the conduct of public affairs.”

The Oversight Board issued a statement provided to Ars, saying that members “are pleased that Meta will begin labeling a wider range of video, audio, and image content as ‘made with AI’ when they detect AI image indicators or when people indicate they have uploaded AI content.”

“This will provide people with greater context and transparency for more types of manipulated media, while also removing posts which violate Meta’s rules in other ways,” the Oversight Board said.


Facebook let Netflix see user DMs, quit streaming to keep Netflix happy: Lawsuit

A promotional image for Sorry for Your Loss, a Facebook Watch original scripted series starring Elizabeth Olsen.

Last April, Meta revealed that it would no longer support original shows, like Jada Pinkett Smith’s Red Table Talk, on Facebook Watch. Meta’s streaming business, once viewed as competition for the likes of YouTube and Netflix, is now effectively dead: Facebook doesn’t produce original series, and Facebook Watch is no longer available as a video-streaming app.

The streaming business’s demise seemed tied to cost cuts at Meta that also included layoffs. However, recently unsealed court documents in an antitrust suit against Meta [PDF] claim that Meta squashed its streaming dreams to appease one of its biggest ad customers: Netflix.

Facebook allegedly gave Netflix creepy privileges

As spotted by Gizmodo, a letter was filed on April 14 in a class-action antitrust suit brought by Meta customers, who accuse Meta of anticompetitive practices that harm social media competition and consumers. The letter, made public Saturday, asks a court to have Reed Hastings, Netflix’s founder and former CEO, respond to a subpoena for documents that plaintiffs claim are relevant to the case. The original complaint filed in December 2020 [PDF] doesn’t mention Netflix beyond stating that Facebook “secretly signed Whitelist and Data sharing agreements” with Netflix, along with “dozens” of other third-party app developers. The case is still ongoing.

The letter alleges that Netflix’s relationship with Facebook was remarkably strong due to the former’s ad spend with the latter and that Hastings directed “negotiations to end competition in streaming video” from Facebook.

One of the first questions that may come to mind is why a company like Facebook would allow Netflix to influence such a major business decision. The litigation claims the companies formed a lucrative business relationship that included Facebook allegedly giving Netflix access to Facebook users’ private messages:

By 2013, Netflix had begun entering into a series of “Facebook Extended API” agreements, including a so-called “Inbox API” agreement that allowed Netflix programmatic access to Facebook’s users’ private message inboxes, in exchange for which Netflix would “provide to FB a written report every two weeks that shows daily counts of recommendation sends and recipient clicks by interface, initiation surface, and/or implementation variant (e.g., Facebook vs. non-Facebook recommendation recipients). … In August 2013, Facebook provided Netflix with access to its so-called “Titan API,” a private API that allowed a whitelisted partner to access, among other things, Facebook users’ “messaging app and non-app friends.”

Meta said it rolled out end-to-end encryption “for all personal chats and calls on Messenger and Facebook” in December. And in 2018, Facebook told Vox that it doesn’t use private messages for ad targeting. But a few months later, The New York Times, citing “hundreds of pages of Facebook documents,” reported that Facebook “gave Netflix and Spotify the ability to read Facebook users’ private messages.”

Meta didn’t respond to Ars Technica’s request for comment. The company told Gizmodo that it has standard agreements with Netflix currently but didn’t answer the publication’s specific questions.


Facebook secretly spied on Snapchat usage to confuse advertisers, court docs say

“I can’t think of a good argument for why this is okay” —

Zuckerberg told execs to “figure out” how to spy on encrypted Snapchat traffic.

Unsealed court documents have revealed more details about a secret Facebook project initially called “Ghostbusters,” designed to sneakily access encrypted Snapchat usage data to give Facebook a leg up on its rival, just when Snapchat was experiencing rapid growth in 2016.

The documents were filed in a class-action lawsuit from consumers and advertisers, accusing Meta of anticompetitive behavior that blocks rivals from competing in the social media ads market.

“Whenever someone asks a question about Snapchat, the answer is usually that because their traffic is encrypted, we have no analytics about them,” Facebook CEO Mark Zuckerberg (who has since rebranded his company as Meta) wrote in a 2016 email to Javier Olivan.

“Given how quickly they’re growing, it seems important to figure out a new way to get reliable analytics about them,” Zuckerberg continued. “Perhaps we need to do panels or write custom software. You should figure out how to do this.”

At the time, Olivan was Facebook’s head of growth, but now he’s Meta’s chief operating officer. He responded to Zuckerberg’s email saying that he would have the team from Onavo—a controversial traffic-analysis app acquired by Facebook in 2013—look into it.

Olivan told the Onavo team that he needed “out of the box thinking” to satisfy Zuckerberg’s request. He “suggested potentially paying users to ‘let us install a really heavy piece of software'” to intercept users’ Snapchat data, a court document shows.

What the Onavo team eventually came up with was a project internally known as “Ghostbusters,” an obvious reference to Snapchat’s logo featuring a white ghost. Later, as the project grew to include other Facebook rivals, including YouTube and Amazon, the project was called the “In-App Action Panel” (IAAP).

The IAAP program’s purpose was to gather granular insights into users’ engagement with rival apps to help Facebook develop products as needed to stay ahead of competitors. For example, two months after Zuckerberg’s 2016 email, Meta launched Stories, a Snapchat copycat feature, on Instagram, which the Motley Fool noted rapidly became a key ad revenue source for Meta.

In an email to Olivan, the Onavo team described the “technical solution” devised to help Zuckerberg figure out how to get reliable analytics about Snapchat users. It worked by “develop[ing] ‘kits’ that can be installed on iOS and Android that intercept traffic for specific sub-domains, allowing us to read what would otherwise be encrypted traffic so we can measure in-app usage,” the Onavo team said.

Olivan was told that these so-called “kits” used a “man-in-the-middle” attack typically employed by hackers to secretly intercept data passed between two parties. Users were recruited by third parties who distributed the kits “under their own branding” so that they wouldn’t connect the kits to Onavo unless they used a specialized tool like Wireshark to analyze the kits. TechCrunch reported in 2019 that sometimes teens were paid to install these kits. After that report, Facebook promptly shut down the project.

This “man-in-the-middle” tactic, consumers and advertisers suing Meta have alleged, “was not merely anticompetitive, but criminal,” seemingly violating the Wiretap Act. It was used to snoop on Snapchat starting in 2016, on YouTube from 2017 to 2018, and on Amazon in 2018, relying on creating “fake digital certificates to impersonate trusted Snapchat, YouTube, and Amazon analytics servers to redirect and decrypt secure traffic from those apps for Facebook’s strategic analysis.”

Ars could not reach Snapchat, Google, or Amazon for comment.

Facebook allegedly sought to confuse advertisers

Not everyone at Facebook supported the IAAP program. “The company’s highest-level engineering executives thought the IAAP Program was a legal, technical, and security nightmare,” another court document said.

Pedro Canahuati, then-head of security engineering, warned that incentivizing users to install the kits did not necessarily mean that users understood what they were consenting to.

“I can’t think of a good argument for why this is okay,” Canahuati said. “No security person is ever comfortable with this, no matter what consent we get from the general public. The general public just doesn’t know how this stuff works.”

Mike Schroepfer, then-chief technology officer, argued that Facebook wouldn’t want rivals to employ a similar program analyzing their encrypted user data.

“If we ever found out that someone had figured out a way to break encryption on [WhatsApp] we would be really upset,” Schroepfer said.

While the unsealed emails detailing the project have recently raised eyebrows, Meta’s spokesperson told Ars that “there is nothing new here—this issue was reported on years ago. The plaintiffs’ claims are baseless and completely irrelevant to the case.”

According to Business Insider, advertisers suing said that Meta never disclosed its use of Onavo “kits” to “intercept rivals’ analytics traffic.” This is seemingly relevant to their case alleging anticompetitive behavior in the social media ads market, because Facebook’s conduct, allegedly breaking wiretapping laws, afforded Facebook an opportunity to raise its ad rates “beyond what it could have charged in a competitive market.”

Since the documents were unsealed, Meta has responded with a court filing that said: “Snapchat’s own witness on advertising confirmed that Snap cannot ‘identify a single ad sale that [it] lost from Meta’s use of user research products,’ does not know whether other competitors collected similar information, and does not know whether any of Meta’s research provided Meta with a competitive advantage.”

This conflicts with testimony from a Snapchat executive, who alleged that the project “hamper[ed] Snap’s ability to sell ads” by causing “advertisers to not have a clear narrative differentiating Snapchat from Facebook and Instagram.” Both internally and externally, “the intelligence Meta gleaned from this project was described” as “devastating to Snapchat’s ads business,” a court filing said.


Apple, Google, and Meta are failing DMA compliance, EU suspects

EU Commissioner for Internal Market Thierry Breton talks to media about non-compliance investigations against Google, Apple, and Meta under the Digital Markets Act (DMA).

Not even three weeks after the European Union’s Digital Markets Act (DMA) took effect, the European Commission (EC) announced Monday that it is already probing three out of six gatekeepers—Apple, Google, and Meta—for suspected non-compliance.

Apple will need to prove that its app store changes and the options it gives users to easily swap out default settings are sufficient to comply with the DMA.

Similarly, Google’s app store rules will be probed, as well as any potentially shady practices unfairly preferencing its own services—like Google Shopping and Hotels—in search results.

Finally, Meta’s “Subscription for No Ads” option—allowing Facebook and Instagram users to opt out of personalized ad targeting for a monthly fee—may not fly under the DMA. Even if Meta follows through on its recent offer to slash these fees by nearly 50 percent, the model could be deemed non-compliant.

“The DMA is very clear: gatekeepers must obtain users’ consent to use their personal data across different services,” the EC’s commissioner for internal market, Thierry Breton, said Monday. “And this consent must be free!”

In total, the EC announced five investigations: two against Apple, two against Google, and one against Meta.

“We suspect that the suggested solutions put forward by the three companies do not fully comply with the DMA,” antitrust chief Margrethe Vestager said, ordering companies to “retain certain documents” viewed as critical to assessing evidence in the probe.

The EC’s investigations are expected to conclude within one year. If tech companies are found non-compliant, they risk fines of up to 10 percent of total worldwide turnover. Any repeat violations could spike fines to 20 percent.

“Moreover, in case of systematic infringements, the Commission may also adopt additional remedies, such as obliging a gatekeeper to sell a business or parts of it or banning the gatekeeper from acquisitions of additional services related to the systemic non-compliance,” the EC’s announcement said.

In addition to probes into Apple, Google, and Meta, the EC will scrutinize Apple’s fee structure for app store alternatives and send retention orders to Amazon and Microsoft. That makes ByteDance the only gatekeeper so far to escape “investigatory steps” as the EU fights to enforce the DMA’s strict standards. (ByteDance continues to contest its gatekeeper status.)

“These are the cases where we already have concrete evidence of possible non-compliance,” Breton said, “and this in less than 20 days of DMA implementation. But our monitoring and investigative work of course doesn’t stop here. We may have to open other non-compliance cases soon.”

Google and Apple have both issued statements defending their current plans for DMA compliance.

“To comply with the Digital Markets Act, we have made significant changes to the way our services operate in Europe,” Google’s competition director Oliver Bethell told Ars, promising to “continue to defend our approach in the coming months.”

“We’re confident our plan complies with the DMA, and we’ll continue to constructively engage with the European Commission as they conduct their investigations,” Apple’s spokesperson told Ars. “Teams across Apple have created a wide range of new developer capabilities, features, and tools to comply with the regulation. At the same time, we’ve introduced protections to help reduce new risks to the privacy, quality, and security of our EU users’ experience. Throughout, we’ve demonstrated flexibility and responsiveness to the European Commission and developers, listening and incorporating their feedback.”

A Meta spokesperson told Ars that Meta “designed Subscription for No Ads to address several overlapping regulatory obligations, including the DMA,” promising to comply with the DMA while arguing that “subscriptions as an alternative to advertising are a well-established business model across many industries.”

The EC’s announcement came after all designated gatekeepers were required to submit DMA compliance reports and after the EC scheduled public workshops to discuss DMA compliance. Those workshops conclude tomorrow with Microsoft and appear to be partly driving the EC’s decision to probe Apple, Google, and Meta.

“Stakeholders provided feedback on the compliance solutions offered,” Vestager said. “Their feedback tells us that certain compliance measures fail to achieve their objectives and fall short of expectations.”

Apple and Google app stores probed

Under the DMA, “gatekeepers can no longer prevent their business users from informing their users within the app about cheaper options outside the gatekeeper’s ecosystem,” Vestager said. “That is called anti-steering and is now forbidden by law.”

Stakeholders told the EC that Apple’s and Google’s fee structures appear to “go against” the DMA’s “free of charge” requirement, Vestager said, because companies “still charge various recurring fees and still limit steering.”

This feedback pushed the EC to launch its first two probes under the DMA against Apple and Google.

“We will investigate to what extent these fees and limitations defeat the purpose of the anti-steering provision and by that, limit consumer choice,” Vestager said.

These probes aren’t the end of Apple’s potential app store woes in the EU, either. Breton said that the EC has “many questions on Apple’s new business model” for the app store. These include “questions on the process that Apple used for granting and terminating membership of” its developer program, following a scandal where Epic Games’ account was briefly terminated.

“We also have questions on the fee structure and several other aspects of the business model,” Breton said, vowing to “check if they allow for real opportunities for app developers in line with the letter and the spirit of the DMA.”
