Online advertising

Facebook secretly spied on Snapchat usage to confuse advertisers, court docs say

“I can’t think of a good argument for why this is okay”

Zuckerberg told execs to “figure out” how to spy on encrypted Snapchat traffic.

Unsealed court documents have revealed more details about a secret Facebook project initially called “Ghostbusters,” designed to sneakily access encrypted Snapchat usage data to give Facebook a leg up on its rival, just when Snapchat was experiencing rapid growth in 2016.

The documents were filed in a class-action lawsuit from consumers and advertisers, accusing Meta of anticompetitive behavior that blocks rivals from competing in the social media ads market.

“Whenever someone asks a question about Snapchat, the answer is usually that because their traffic is encrypted, we have no analytics about them,” Facebook CEO Mark Zuckerberg (who has since rebranded his company as Meta) wrote in a 2016 email to Javier Olivan.

“Given how quickly they’re growing, it seems important to figure out a new way to get reliable analytics about them,” Zuckerberg continued. “Perhaps we need to do panels or write custom software. You should figure out how to do this.”

At the time, Olivan was Facebook’s head of growth, but now he’s Meta’s chief operating officer. He responded to Zuckerberg’s email saying that he would have the team from Onavo—a controversial traffic-analysis app acquired by Facebook in 2013—look into it.

Olivan told the Onavo team that he needed “out of the box thinking” to satisfy Zuckerberg’s request. He “suggested potentially paying users to ‘let us install a really heavy piece of software’” to intercept users’ Snapchat data, a court document shows.

What the Onavo team eventually came up with was a project internally known as “Ghostbusters,” an obvious reference to Snapchat’s logo featuring a white ghost. Later, as the project grew to include other Facebook rivals, including YouTube and Amazon, the project was called the “In-App Action Panel” (IAAP).

The IAAP program’s purpose was to gather granular insights into users’ engagement with rival apps to help Facebook develop products as needed to stay ahead of competitors. For example, two months after Zuckerberg’s 2016 email, Meta launched Stories, a Snapchat copycat feature, on Instagram, which the Motley Fool noted rapidly became a key ad revenue source for Meta.

In an email to Olivan, the Onavo team described the “technical solution” devised to help Zuckerberg figure out how to get reliable analytics about Snapchat users. It worked by “develop[ing] ‘kits’ that can be installed on iOS and Android that intercept traffic for specific sub-domains, allowing us to read what would otherwise be encrypted traffic so we can measure in-app usage,” the Onavo team said.

Olivan was told that these so-called “kits” used a “man-in-the-middle” attack of the sort typically employed by hackers to secretly intercept data passed between two parties. Users were recruited by third parties who distributed the kits “under their own branding,” so that users wouldn’t connect the kits to Onavo unless they inspected the traffic with a specialized tool like Wireshark. TechCrunch reported in 2019 that teens were sometimes paid to install these kits. After that report, Facebook promptly shut down the project.

This “man-in-the-middle” tactic, consumers and advertisers suing Meta have alleged, “was not merely anticompetitive, but criminal,” seemingly violating the Wiretap Act. It was used to snoop on Snapchat starting in 2016, on YouTube from 2017 to 2018, and on Amazon in 2018, relying on creating “fake digital certificates to impersonate trusted Snapchat, YouTube, and Amazon analytics servers to redirect and decrypt secure traffic from those apps for Facebook’s strategic analysis.”
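
The fake-certificate scheme described above is a classic TLS man-in-the-middle: a device is made to trust an attacker-controlled root certificate, the proxy presents forged certificates for targeted hosts, and only traffic to specific sub-domains gets decrypted while everything else passes through untouched. A minimal sketch of that sub-domain targeting step follows; the host names are hypothetical illustrations, not taken from the court filings:

```python
# Illustrative sketch of the sub-domain targeting an intercepting proxy
# might use. The host names below are hypothetical examples, not taken
# from the court documents.

TARGETED_SUBDOMAINS = {
    "analytics.example-app.com",   # hypothetical analytics endpoint
    "metrics.example-app.com",
}

def should_intercept(host: str) -> bool:
    """Return True if traffic to this host would be decrypted.

    A man-in-the-middle proxy only presents a forged certificate for
    hosts it targets; all other traffic is passed through unmodified.
    """
    host = host.lower().rstrip(".")
    return any(
        host == target or host.endswith("." + target)
        for target in TARGETED_SUBDOMAINS
    )

print(should_intercept("analytics.example-app.com"))  # targeted
print(should_intercept("www.example-app.com"))        # passed through
```

This covers only the selection logic; a real intercepting proxy (the kind tools like mitmproxy implement) would additionally terminate TLS with a forged certificate and re-encrypt the traffic toward the genuine server.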

Ars could not reach Snapchat, Google, or Amazon for comment.

Facebook allegedly sought to confuse advertisers

Not everyone at Facebook supported the IAAP program. “The company’s highest-level engineering executives thought the IAAP Program was a legal, technical, and security nightmare,” another court document said.

Pedro Canahuati, then-head of security engineering, warned that incentivizing users to install the kits did not necessarily mean that users understood what they were consenting to.

“I can’t think of a good argument for why this is okay,” Canahuati said. “No security person is ever comfortable with this, no matter what consent we get from the general public. The general public just doesn’t know how this stuff works.”

Mike Schroepfer, then-chief technology officer, argued that Facebook wouldn’t want rivals to employ a similar program analyzing their encrypted user data.

“If we ever found out that someone had figured out a way to break encryption on [WhatsApp] we would be really upset,” Schroepfer said.

While the unsealed emails detailing the project have recently raised eyebrows, Meta’s spokesperson told Ars that “there is nothing new here—this issue was reported on years ago. The plaintiffs’ claims are baseless and completely irrelevant to the case.”

According to Business Insider, advertisers suing said that Meta never disclosed its use of Onavo “kits” to “intercept rivals’ analytics traffic.” This is seemingly relevant to their case alleging anticompetitive behavior in the social media ads market, because Facebook’s conduct, allegedly breaking wiretapping laws, afforded Facebook an opportunity to raise its ad rates “beyond what it could have charged in a competitive market.”

Since the documents were unsealed, Meta has responded with a court filing that said: “Snapchat’s own witness on advertising confirmed that Snap cannot ‘identify a single ad sale that [it] lost from Meta’s use of user research products,’ does not know whether other competitors collected similar information, and does not know whether any of Meta’s research provided Meta with a competitive advantage.”

This conflicts with testimony from a Snapchat executive, who alleged that the project “hamper[ed] Snap’s ability to sell ads” by causing “advertisers to not have a clear narrative differentiating Snapchat from Facebook and Instagram.” Both internally and externally, “the intelligence Meta gleaned from this project was described” as “devastating to Snapchat’s ads business,” a court filing said.

Data broker allegedly selling de-anonymized info to face FTC lawsuit after all

The Federal Trade Commission has succeeded in keeping alive its first federal court case against a geolocation data broker that’s allegedly unfairly selling large quantities of data in violation of the FTC Act.

On Saturday, US District Judge Lynn Winmill denied Kochava’s motion to dismiss an amended FTC complaint, which he said plausibly argued that “Kochava’s data sales invade consumers’ privacy and expose them to risks of secondary harms by third parties.”

Winmill’s ruling reversed a dismissal of the FTC’s initial complaint, which the court previously said failed to adequately allege that Kochava’s data sales cause or are likely to cause a “substantial” injury to consumers.

The FTC has accused Kochava of selling “a substantial amount of data obtained from millions of mobile devices across the world”—allegedly combining precise geolocation data with a “staggering amount of sensitive and identifying information” without users’ knowledge or informed consent. This data, the FTC alleged, “is not anonymized and is linked or easily linkable to individual consumers” without mining “other sources of data.”

Kochava’s data sales allegedly allow its customers—whom the FTC noted often pay tens of thousands of dollars monthly—to target specific individuals by combining Kochava data sets. Using just Kochava data, marketers can create “highly granular” portraits of ad targets such as “a woman who visits a particular building, the woman’s name, email address, and home address, and whether the woman is African-American, a parent (and if so, how many children), or has an app identifying symptoms of cancer on her phone.” Just one of Kochava’s databases “contains ‘comprehensive profiles of individual consumers,’ with up to ‘300 data points’ for ‘over 300 million unique individuals,’” the FTC reported.
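
The “combining data sets” mechanic the FTC describes is essentially a join on a shared device identifier. A toy sketch of that join follows; all records and field names are invented for illustration and are not Kochava’s actual schema:

```python
# Toy illustration of how separate data sets keyed on the same mobile
# advertising ID can be joined into one profile. All records and field
# names here are invented, not Kochava's actual schema.

locations = {"device-123": ["clinic parking lot", "apartment building"]}
identities = {"device-123": {"name": "Jane Doe", "email": "jane@example.com"}}
app_usage = {"device-123": ["symptom-tracker app"]}

def build_profile(device_id: str) -> dict:
    """Merge the per-device records from each data set into one profile."""
    return {
        "identity": identities.get(device_id, {}),
        "locations": locations.get(device_id, []),
        "apps": app_usage.get(device_id, []),
    }

profile = build_profile("device-123")
print(profile["identity"]["name"])  # identity joined onto location history
```

The point of the sketch is that none of the individual data sets needs to be identifying on its own; the shared device ID is what links precise movements to a name and email address.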

This harms consumers, the FTC alleged, in “two distinct ways”—by invading their privacy and by causing “an increased risk of suffering secondary harms, such as stigma, discrimination, physical violence, and emotional distress.”

In its amended complaint, the FTC overcame deficiencies in its initial complaint by citing specific examples of consumers already known to have been harmed by brokers sharing sensitive data without their consent. That included a Catholic priest who resigned after he was outed by a group using precise mobile geolocation data to track his personal use of Grindr and his movements to “LGBTQ+-associated locations.” The FTC also pointed to invasive practices by journalists using precise mobile geolocation data to identify and track military and law enforcement officers over time, as well as data brokers tracking “abortion-minded women” who visited reproductive health clinics to target them with ads about abortion and alternatives to abortion.

“Kochava’s practices intrude into the most private areas of consumers’ lives and cause or are likely to cause substantial injury to consumers,” the FTC’s amended complaint said.

The FTC is seeking a permanent injunction to stop Kochava from allegedly selling sensitive data without user consent.

Kochava dismisses the examples of consumer harms in the FTC’s amended complaint as “anecdotes” disconnected from its own activities. The data broker was seemingly so confident that Winmill would agree to dismiss the FTC’s amended complaint that the company sought sanctions against the FTC for what it construed as a “baseless” filing. According to Kochava, many of the FTC’s allegations were “knowingly false.”

Ultimately, the court found no evidence that the FTC’s complaints were baseless. Instead of dismissing the case and ordering the FTC to pay sanctions, Winmill wrote in his order that Kochava’s motion to dismiss “misses the point” of the FTC’s filing, which was to allege that Kochava’s data sales are “likely” to cause alleged harms. Because the FTC had “significantly” expanded factual allegations, the agency “easily” satisfied the plausibility standard to allege substantial harms were likely, Winmill said.

Kochava CEO and founder Charles Manning said in a statement provided to Ars that Kochava “expected” Winmill’s ruling and is “confident” that Kochava “will prevail on the merits.”

“This case is really about the FTC attempting to make an end-run around Congress to create data privacy law,” Manning said. “The FTC’s salacious hypotheticals in its amended complaint are mere scare tactics. Kochava has always operated consistently and proactively in compliance with all rules and laws, including those specific to privacy.”

In a press release announcing the FTC lawsuit in 2022, the director of the FTC’s Bureau of Consumer Protection, Samuel Levine, said that the FTC was determined to halt Kochava’s allegedly harmful data sales.

“Where consumers seek out health care, receive counseling, or celebrate their faith is private information that shouldn’t be sold to the highest bidder,” Levine said. “The FTC is taking Kochava to court to protect people’s privacy and halt the sale of their sensitive geolocation information.”

Patreon: Blocking platforms from sharing user video data is unconstitutional

Patreon, a monetization platform for content creators, has asked a federal judge to deem unconstitutional a rarely invoked law that some privacy advocates consider one of the nation’s “strongest protections of consumer privacy against a specific form of data collection.” Such a ruling would undo decades of careful US efforts to shield the privacy of millions of Americans’ personal video viewing habits.

The Video Privacy Protection Act (VPPA) blocks businesses from sharing data with third parties on customers’ video purchases and rentals. At a minimum, the VPPA requires written consent each time a business wants to share this sensitive video data—including the title, description, and, in most cases, the subject matter.

The VPPA was passed in 1988 in response to backlash over a reporter sharing the video store rental history of a judge, Robert Bork, who had been nominated to the Supreme Court by Ronald Reagan. The report revealed that Bork apparently liked spy thrillers and British costume dramas and suggested that maybe the judge had a family member who dug John Hughes movies.

Although the videos that Bork rented “revealed nothing particularly salacious” about the judge, the intent of reporting the “Bork Tapes” was to confront the judge “with his own vulnerability to privacy harms” during a time when the Supreme Court nominee had “criticized the constitutional right to privacy” as “a loose canon in the law,” Harvard Law Review noted.

Even though no harm was caused by sharing the “Bork Tapes,” policymakers on both sides of the aisle agreed that the privacy of people’s viewing habits deserved protection, lest the fear of disclosure chill speech by altering what people choose to watch. The US government has not budged from this stance since. It is currently backing a lawsuit filed in 2022 by Patreon users, who claim that damages are owed, even absent concrete harm, because Patreon allegedly violated the VPPA by sharing data on videos they watched on the platform with Facebook through the Meta Pixel without their written consent.

“Restricting the ability of those who possess a consumer’s video purchase, rental, or request history to disclose such information directly advances the goal of keeping that information private and protecting consumers’ intellectual freedom,” the Department of Justice’s brief said.

The Meta Pixel is a piece of code used by companies like Patreon to better target content to users by tracking their activity and monitoring conversions on Meta platforms. “In simplest terms,” Patreon users said in an amended complaint, “the Pixel allows Meta to know what video content one of its users viewed on Patreon’s website.”
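
Conceptually, a tracking pixel is a small script that fires a request back to its vendor whenever a user loads a page, carrying enough parameters to identify both the page viewed and the pixel installation that reported it. A hedged sketch of what such a request might contain follows; the endpoint and parameter names are invented for illustration and are not Meta’s actual API:

```python
from urllib.parse import urlencode

# Hypothetical illustration of what a pixel-style tracker reports back
# to its vendor; the real Meta Pixel's endpoint and parameters differ.
PIXEL_ENDPOINT = "https://tracker.example.com/collect"  # invented endpoint

def build_pixel_request(pixel_id: str, event: str, page_url: str) -> str:
    """Build the GET request a pixel-style tracker might fire on a page
    view -- enough for the vendor to know which video page was watched
    and which site's pixel installation reported it."""
    params = {"id": pixel_id, "ev": event, "url": page_url}
    return f"{PIXEL_ENDPOINT}?{urlencode(params)}"

print(build_pixel_request(
    "12345", "PageView", "https://patreon.example/video/how-to-paint"))
```

Because the reported URL typically contains the video’s title or slug, the vendor learns what was watched even though the site never explicitly sends a “title” field, which is the crux of the VPPA claims.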

The Pixel is currently at the center of a pile of privacy lawsuits, where people have accused various platforms of using the Pixel to covertly share sensitive data without users’ consent, including health and financial data.

Several lawsuits have specifically lobbed VPPA claims, which, users have argued, validates the urgency of retaining the protections that Patreon now seeks to strike. The DOJ argued that “the explosion of recent VPPA cases” is proof “that the disclosures the statute seeks to prevent are a legitimate concern,” despite Patreon’s claim that the statute does “nothing to materially or directly advance the privacy interests it supposedly was enacted to protect.”

Patreon’s attack on the VPPA

Patreon has argued in a recent court filing that the VPPA was not enacted to protect average video viewers from embarrassing and unwarranted disclosures but “for the express purpose of silencing disclosures about political figures and their video-watching, an issue of undisputed continuing public interest and concern.”

That’s one of many ways that the VPPA silences speech, Patreon argued, by allegedly preventing disclosures regarding public figures that are relevant to public interest.

Among other “fatal flaws,” Patreon alleged, the VPPA “restrains speech” while “doing little if anything to protect privacy” and never protecting privacy “by the least restrictive means.”

Patreon claimed that the VPPA is too narrow, focusing only on pre-recorded videos. It prevents video service providers from disclosing to any other person the titles of videos that someone watched, but it does not necessarily stop platforms from sharing information about “the genres, performers, directors, political views, sexual content, and every other detail of pre-recorded video that those consumers watch,” Patreon claimed.

Meta relents to EU, allows unlinking of Facebook and Instagram accounts

Meta will allow some Facebook and Instagram users to unlink their accounts as part of the platform’s efforts to comply with the European Union’s Digital Markets Act (DMA) ahead of enforcement starting March 1.

In a blog post, Meta’s competition and regulatory director, Tim Lamb, wrote that Instagram and Facebook users in the EU, the European Economic Area, and Switzerland would be notified in the “next few weeks” about “more choices about how they can use” Meta’s services and features, including new opportunities to limit data-sharing across apps and services.

Most significantly, users can choose to either keep their accounts linked or “manage their Instagram and Facebook accounts separately so that their information is no longer used across accounts.” Until now, linked accounts have given Meta more data with which to target ads more effectively. Access to data on Instagram’s widening younger user base, TechCrunch noted, was arguably the selling point behind Facebook’s $1 billion acquisition of Instagram in 2012.

Also announced today, users protected by the DMA will soon be able to separate their Facebook Messenger, Marketplace, and Gaming accounts. However, doing so will limit some social features available in some of the standalone apps.

While Messenger users who disconnect the chat service from their Facebook accounts will still “be able to use Messenger’s core service offering such as private messaging and chat, voice and video calling,” Marketplace users making the same choice will have to email sellers and buyers rather than messaging them through Facebook. And unlinked Gaming app users will only be able to play single-player games, severing their access to the social gaming otherwise supported by linking the Gaming service to their Facebook social networks.

While Meta may have had options other than stripping features from users who unlink their accounts, it had little choice about offering the newly announced unlinking options themselves. The DMA specifically requires that very large platforms designated as “gatekeepers” give users the “specific choice” of opting out of sharing personal data across a platform’s different core services or across any separate services that the gatekeepers manage.

Without gaining “specific” consent, gatekeepers will no longer be allowed to “combine personal data from the relevant core platform service with personal data from any further core platform services” or “cross-use personal data from the relevant core platform service in other services provided separately by the gatekeeper,” the DMA says. The “specific” requirement is designed to block platforms from securing consent at sign-up, then hoovering up as much personal data as possible as new services are added in an endless pursuit of advertising growth.

The EU’s requirement of “specific” consent, as defined under the General Data Protection Regulation, stops platforms from obtaining user consent for broadly defined data processing by establishing “the need for granularity”: platforms must seek consent for each “specific” data “processing purpose.”
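
That “granularity” idea can be pictured as consent recorded per processing purpose rather than as one blanket flag collected at sign-up. A minimal sketch follows, with an invented data model that is not the DMA’s or any platform’s actual implementation:

```python
# Illustrative model of per-purpose ("specific") consent, as opposed to
# one blanket sign-up consent. Purpose names here are hypothetical.

class ConsentLedger:
    def __init__(self):
        # Each entry is one (user, purpose) grant; nothing is implied
        # beyond the exact purposes a user has agreed to.
        self._granted: set[tuple[str, str]] = set()

    def grant(self, user: str, purpose: str) -> None:
        self._granted.add((user, purpose))

    def may_combine(self, user: str, service_a: str, service_b: str) -> bool:
        """Cross-service data combination needs its own specific consent;
        consenting to either service alone is not enough."""
        purpose = f"combine:{min(service_a, service_b)}+{max(service_a, service_b)}"
        return (user, purpose) in self._granted

ledger = ConsentLedger()
ledger.grant("alice", "combine:facebook+instagram")
print(ledger.may_combine("alice", "instagram", "facebook"))  # True
print(ledger.may_combine("bob", "instagram", "facebook"))    # False
```

The design point is that adding a new service or purpose later grants nothing by default, which is exactly the “gradual widening” of processing that the specific-consent rule is meant to block.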

“This is an important ‘safeguard against the gradual widening or blurring of purposes for which data is processed, after a data subject has agreed to the initial collection of the data,’” the European Data Protection Supervisor explained in public comments describing “commercial surveillance and data security practices that harm consumers” provided at the request of the FTC in 2022.

According to Meta’s help page, once users opt out of sharing data between apps and services, Meta will “stop combining your info across these accounts” within 15 days “after you’ve removed them.” However, all “previously combined info would remain combined.”

Climate denialists find new ways to monetize disinformation on YouTube

Content creators have spent the past five years developing new tactics to evade YouTube’s policies blocking monetization of videos making false claims about climate change, a report from a nonprofit advocacy group, the Center for Countering Digital Hate (CCDH), warned Tuesday.

What the CCDH found is that content creators who could no longer monetize videos spreading “old” forms of climate denial—including claims that “global warming is not happening” or “human-generated greenhouse gasses are not causing global warming”—have moved on.

Now they’re increasingly pushing other claims that contradict climate science, which YouTube has not yet banned and may not ever ban. These include harmful claims that “impacts of global warming are beneficial or harmless,” “climate solutions won’t work,” and “climate science and the climate movement are unreliable.”

The CCDH uncovered these new climate-denial tactics by using artificial intelligence to scan transcripts of 12,058 videos posted on 96 YouTube channels that the CCDH found had previously posted climate-denial content. Researchers verified the AI model’s output and judged it accurate in labeling climate-denial content approximately 78 percent of the time.

According to the CCDH’s analysis, content disputing climate solutions, climate science, and the impacts of climate change now comprises 70 percent of climate-denial content, a share that roughly doubled from 2018 to 2023. Over the same period, content pushing old climate-denial claims that are harder or impossible to monetize fell from 65 percent to 30 percent.

These “new forms of climate denial,” the CCDH warned, are designed to delay climate action by spreading disinformation.

“A new front has opened up in this battle,” Imran Ahmed, the CCDH’s chief executive, said on a call with reporters, according to Reuters. “The people that we’ve been looking at, they’ve gone from saying climate change isn’t happening to now saying, ‘Hey, climate change is happening, but there is no hope. There are no solutions.’”

Since 2018—based on “estimates of typical ad pricing on YouTube” by social media analytics tool Social Blade—YouTube may have profited by as much as $13.4 million annually from videos flagged by the CCDH. And YouTube confirmed that some of these videos featured climate denialism that YouTube already explicitly bans.

In response to the CCDH’s report, YouTube de-monetized some videos found to be in violation of its climate change policy. But a spokesperson confirmed to Ars that the majority of videos that the CCDH found were considered compliant with YouTube’s ad policies.

The fact that most of these videos remain compliant, though, is precisely why the CCDH is calling on YouTube to update its policies.

Currently, YouTube’s policy prohibits monetization of content “that contradicts well-established scientific consensus around the existence and causes of climate change.”

“Our climate change policy prohibits ads from running on content that contradicts well-established scientific consensus around the existence and causes of climate change,” YouTube’s spokesperson told Ars. “Debate or discussions of climate change topics, including around public policy or research, is allowed. However, when content crosses the line to climate change denial, we stop showing ads on those videos. We also display information panels under relevant videos to provide additional information on climate change and context from third parties.”

The CCDH worries that YouTube’s defense of its current policy is short-sighted. The group recommended tweaking the policy to instead specify that YouTube prohibits monetization of content “that contradicts the authoritative scientific consensus on the causes, impacts, and solutions to climate change.”

If YouTube and other social media platforms don’t acknowledge new forms of climate denial and “urgently” update their disinformation policies in response, these new attacks on climate change science “will only increase,” the CCDH warned.

“It is vital that those advocating for action to avert climate disaster take note of this substantial shift from denial of anthropogenic climate change to undermining trust in both solutions and science itself, and shift our focus, our resources and our counternarratives accordingly,” the CCDH’s report said, adding that “demonetizing climate-denial” content “removes the economic incentives underpinning its creation and protects advertisers from bankrolling harmful content.”
