Online Privacy

Shopping app Temu is “dangerous malware,” spying on your texts, lawsuit claims

“Cleverly hidden spyware” —

Temu “surprised” by the lawsuit, plans to “vigorously defend” itself.

A person is holding a package from Temu.

Temu—the Chinese shopping app that has rapidly grown so popular in the US that even Amazon is reportedly trying to copy it—is “dangerous malware” that’s secretly monetizing a broad swath of unauthorized user data, Arkansas Attorney General Tim Griffin alleged in a lawsuit filed Tuesday.

Griffin cited research and media reports exposing Temu’s allegedly nefarious design, which “purposely” allows Temu to “gain unrestricted access to a user’s phone operating system, including, but not limited to, a user’s camera, specific location, contacts, text messages, documents, and other applications.”

“Temu is designed to make this expansive access undetected, even by sophisticated users,” Griffin’s complaint said. “Once installed, Temu can recompile itself and change properties, including overriding the data privacy settings users believe they have in place.”

Griffin fears that Temu is capable of accessing virtually all data on a person’s phone, exposing both users and non-users to extreme privacy and security risks. It appears that anyone texting or emailing someone with the shopping app installed risks Temu accessing private data, Griffin’s suit claimed, which Temu then allegedly monetizes by selling it to third parties, “profiting at the direct expense” of users’ privacy rights.

“Compounding” risks is the possibility that Temu’s Chinese owners, PDD Holdings, are legally obligated to share data with the Chinese government, the lawsuit said, due to Chinese “laws that mandate secret cooperation with China’s intelligence apparatus regardless of any data protection guarantees existing in the United States.”

Griffin’s suit cited an extensive forensic investigation into Temu published last September by Grizzly Research, a firm that analyzes publicly traded companies to inform investors. In its report, Grizzly Research alleged that PDD Holdings is a “fraudulent company” and that “Temu is cleverly hidden spyware that poses an urgent security threat to United States national interests.”

As Griffin sees it, Temu baits users with misleading promises of discounted, quality goods, angling to get access to as much user data as possible by adding addictive features that keep users logged in, like spinning a wheel for deals. Meanwhile, hundreds of complaints to the Better Business Bureau showed that Temu’s goods are actually low-quality, Griffin alleged, apparently supporting his claim that Temu’s end goal isn’t to be the world’s biggest shopping platform but to steal data.

Investigators agreed, the lawsuit said, concluding “we strongly suspect that Temu is already, or intends to, illegally sell stolen data from Western country customers to sustain a business model that is otherwise doomed for failure.”

Seeking an injunction to stop Temu from allegedly spying on users, Griffin is hoping a jury will find that Temu’s alleged practices violated the Arkansas Deceptive Trade Practices Act (ADTPA) and the Arkansas Personal Information Protection Act. If Temu loses, it could be on the hook for $10,000 per violation of the ADTPA and ordered to disgorge profits from data sales and deceptive sales on the app.

Temu “surprised” by lawsuit

The company that owns Temu, PDD Holdings, was founded in 2015 by a former Google employee, Colin Huang. It was originally based in China, but after security concerns were raised, the company relocated its “principal executive offices” to Ireland, Griffin’s complaint said. This, Griffin suggested, was intended to distance the company from debate over national security risks posed by China, but because the majority of its business operations remain in China, risks allegedly remain.

PDD Holdings’ relocation came amid heightened scrutiny of Pinduoduo, the Chinese app on which Temu’s shopping platform is based. Last year, Pinduoduo came under fire for privacy and security risks that got the app suspended from Google Play as suspected malware. Experts said Pinduoduo took security and privacy risks “to the next level,” the lawsuit said. And “around the same time,” Apple’s App Store also flagged Temu’s data privacy terms as misleading, further heightening scrutiny of two of PDD Holdings’ biggest apps, the complaint noted.

Researchers found that Pinduoduo “was programmed to bypass users’ cell phone security in order to monitor activities on other apps, check notifications, read private messages, and change settings,” the lawsuit said. “It also could spy on competitors by tracking activity on other shopping apps and getting information from them,” as well as “run in the background and prevent itself from being uninstalled.” The motivation behind the malicious design was apparently “to boost sales.”

According to Griffin, the same concerns that got Pinduoduo suspended last year remain today for Temu users, but the App Store and Google Play have allegedly failed to take action to prevent unauthorized access to user data. Within a year of Temu’s launch, the “same software engineers and product managers who developed Pinduoduo” allegedly “were transitioned to working on the Temu app.”

Google and Apple did not immediately respond to Ars’ request for comment.

A Temu spokesperson provided a statement to Ars, disputing Grizzly Research’s investigation and confirming that the company was “surprised and disappointed by the Arkansas Attorney General’s Office for filing the lawsuit without any independent fact-finding.”

“The allegations in the lawsuit are based on misinformation circulated online, primarily from a short-seller, and are totally unfounded,” Temu’s spokesperson said. “We categorically deny the allegations and will vigorously defend ourselves.”

While Temu plans to defend against the claims, the company also appears open to making changes based on criticism lobbed in Griffin’s complaint.

“We understand that as a new company with an innovative supply chain model, some may misunderstand us at first glance and not welcome us,” Temu’s spokesperson said. “We are committed to the long-term and believe that scrutiny will ultimately benefit our development. We are confident that our actions and contributions to the community will speak for themselves over time.”

Meta halts plans to train AI on Facebook, Instagram posts in EU

Not so fast —

Meta was going to start training AI on Facebook and Instagram posts on June 26.

Meta has apparently paused plans to process mounds of user data to bring new AI experiences to Europe.

The decision comes after data regulators rebuffed the tech giant’s claims that it had “legitimate interests” in processing European Union- and European Economic Area (EEA)-based Facebook and Instagram users’ data—including personal posts and pictures—to train future AI tools.

There’s not much information available yet on Meta’s decision. But Meta’s EU regulator, the Irish Data Protection Commission (DPC), posted a statement confirming that Meta made the move after ongoing discussions with the DPC about compliance with the EU’s strict data privacy laws, including the General Data Protection Regulation (GDPR).

“The DPC welcomes the decision by Meta to pause its plans to train its large language model using public content shared by adults on Facebook and Instagram across the EU/EEA,” the DPC said. “This decision followed intensive engagement between the DPC and Meta. The DPC, in co-operation with its fellow EU data protection authorities, will continue to engage with Meta on this issue.”

The European Center for Digital Rights, known as Noyb, had filed 11 complaints across the EU and intended to file more to stop Meta from moving forward with its AI plans. The DPC initially gave Meta AI the green light to proceed but has now made a U-turn, Noyb said.

Meta’s policy still requires updating

In a blog, Meta had previously teased new AI features coming to the EU, including everything from customized stickers for chats and stories to Meta AI, a “virtual assistant you can access to answer questions, generate images, and more.” Meta had argued that training on EU users’ personal data was necessary so that AI services could reflect “the diverse cultures and languages of the European communities who will use them.”

Before the pause, the company had been hoping to rely “on the legal basis of ‘legitimate interests’” to process the data, because it’s needed “to improve AI at Meta.” But Noyb and EU data regulators had argued that Meta’s legal basis did not comply with the GDPR, with the Norwegian Data Protection Authority arguing that “the most natural thing would have been to ask the users for their consent before their posts and images are used in this way.”

Rather than ask for consent, however, Meta had given EU users until June 26 to opt out. Noyb had alleged that in going this route, Meta planned to use “dark patterns” to thwart AI opt-outs in the EU and collect as much data as possible to fuel undisclosed AI technologies. Noyb urgently argued that once users’ data is in the system, “users seem to have no option of ever having it removed.”

Noyb said that the “obvious explanation” for Meta seemingly halting its plans was pushback from EU officials, but the privacy advocacy group also warned EU users that Meta’s privacy policy has not yet been fully updated to reflect the pause.

“We welcome this development but will monitor this closely,” Max Schrems, Noyb chair, said in a statement provided to Ars. “So far there is no official change of the Meta privacy policy, which would make this commitment legally binding. The cases we filed are ongoing and will need a determination.”

Ars was not immediately able to reach Meta for comment.

Facebook, Instagram may cut fees by nearly 50% in scramble for DMA compliance

Meta is considering cutting monthly subscription fees for Facebook and Instagram users in the European Union nearly in half to comply with the Digital Markets Act (DMA), Reuters reported.

During a day-long public workshop on Meta’s DMA compliance, Meta’s competition and regulatory director, Tim Lamb, told the European Commission (EC) that individual subscriber fees could be slashed from 9.99 euros to 5.99 euros. Meta is hoping that reducing fees will help to speed up the EC’s process for resolving Meta’s compliance issues. If Meta’s offer is accepted, any additional accounts would then cost 4 euros instead of 6 euros.

Lamb said that these prices are “by far the lowest end of the range that any reasonable person should be paying for services of these quality,” calling it a “serious offer.”

The DMA requires that users of Meta’s Facebook, Instagram, Facebook Messenger, and Facebook Marketplace “freely” give consent to share data used for ad targeting, without losing access to the platforms if they’d prefer not to share data. That means services must provide an acceptable alternative for users who don’t consent to data sharing.

“Gatekeepers should enable end users to freely choose to opt-in to such data processing and sign-in practices by offering a less personalized but equivalent alternative, and without making the use of the core platform service or certain functionalities thereof conditional upon the end user’s consent,” the DMA says.

Designated gatekeepers like Meta have debated what it means for a user to “freely” give consent, suggesting that offering a paid subscription for users who decline to share data would be one route for Meta to continue offering high-quality services without routinely hoovering up data on all its users.

But EU privacy advocates like NOYB have protested Meta’s plan to offer a subscription model as the alternative to consenting to data sharing, calling it a “pay or OK model” that forces Meta users who cannot pay the fee to consent to invasive data sharing they would otherwise decline. In a statement shared with Ars, NOYB chair Max Schrems said that even if Meta reduced its fees to 1.99 euros, it would be forcing consent from 99.9 percent of users.

“We know from all research that even a fee of just 1.99 euros or less leads to a shift in consent from 3–10 percent that genuinely want advertisement to 99.9 percent that still click yes,” Schrems said.

In the EU, the General Data Protection Regulation (GDPR) “requires that consent must be ‘freely’ given,” Schrems said. “In reality, it is not about the amount of money—it is about the ‘pay or OK’ approach as a whole. The entire purpose of ‘pay or OK’, is to get users to click on OK, even if this is not their free and genuine choice. We do not think the mere change of the amount makes this approach legal.”

Where EU stands on subscription models

Meta maintains that a subscription model is a legal alternative under the DMA. The tech giant said it was launching EU subscriptions last November after the Court of Justice of the European Union (CJEU) “endorsed the subscriptions model as a way for people to consent to data processing for personalized advertising.”

It’s unclear how popular the subscriptions have been at the current higher cost. Right now in the EU, monthly Facebook and Instagram subscriptions cost 9.99 euros per month on the web or 12.99 euros per month on iOS and Android, with additional fees of 6 euros per month on the web and 8 euros per month on iOS and Android for each additional account. Meta declined to comment on how many EU users have subscribed, noting to Ars that it has no obligation to do so.

In the CJEU case, the court was reviewing Meta’s GDPR compliance, which Schrems noted is less strict than the DMA. The CJEU specifically said that under the GDPR, “users must be free to refuse individually”—”in the context of” signing up for services— “to give their consent to particular data processing operations not necessary” for Meta to provide such services “without being obliged to refrain entirely from using the service.”

EU accuses TikTok of failing to stop kids pretending to be adults

Getting TikTok’s priorities straight —

TikTok becomes the second platform suspected of Digital Services Act breaches.

The European Commission (EC) is concerned that TikTok isn’t doing enough to protect kids, alleging that the short-video app may be sending kids down rabbit holes of harmful content while making it easy for kids to pretend to be adults and avoid the protective content filters that do exist.

The allegations came Monday when the EC announced a formal investigation into how TikTok may be breaching the Digital Services Act (DSA) “in areas linked to the protection of minors, advertising transparency, data access for researchers, as well as the risk management of addictive design and harmful content.”

“We must spare no effort to protect our children,” Thierry Breton, European Commissioner for Internal Market, said in the press release, reiterating that the “protection of minors is a top enforcement priority for the DSA.”

This makes TikTok the second platform investigated for possible DSA breaches, after X (aka Twitter) came under fire last December. Both drew scrutiny after submitting transparency reports in September that the EC said failed to satisfy the DSA’s strict standards, falling short on predictable fronts like advertising transparency and data access for researchers.

But while X is additionally being investigated over alleged dark patterns and disinformation—following accusations last October that X wasn’t stopping the spread of Israel/Hamas disinformation—it’s TikTok’s young user base that appears to be the focus of the EC’s probe into its platform.

“As a platform that reaches millions of children and teenagers, TikTok must fully comply with the DSA and has a particular role to play in the protection of minors online,” Breton said. “We are launching this formal infringement proceeding today to ensure that proportionate action is taken to protect the physical and emotional well-being of young Europeans.”

Likely over the coming months, the EC will request more information from TikTok, picking apart its DSA transparency report. The probe could require interviews with TikTok staff or inspections of TikTok’s offices.

Upon concluding its investigation, the EC could require TikTok to take interim measures to fix any issues that are flagged. The Commission could also make a decision regarding non-compliance, potentially subjecting TikTok to fines of up to 6 percent of its global turnover.

An EC press officer, Thomas Regnier, told Ars that the Commission suspected that TikTok “has not diligently conducted” risk assessments to properly maintain mitigation efforts protecting “the physical and mental well-being of their users, and the rights of the child.”

In particular, its algorithm may risk “stimulating addictive behavior,” and its recommender systems “might drag its users, in particular minors and vulnerable users, into a so-called ‘rabbit hole’ of repetitive harmful content,” Regnier told Ars. Further, TikTok’s age verification system may be subpar, with the EU alleging that TikTok perhaps “failed to diligently assess the risk of 13-17-year-olds pretending to be adults when accessing TikTok,” Regnier said.

To better protect TikTok’s young users, the EU’s investigation could force TikTok to update its age-verification system and overhaul its default privacy, safety, and security settings for minors.

“In particular, the Commission suspects that the default settings of TikTok’s recommender systems do not ensure a high level of privacy, security, and safety of minors,” Regnier said. “The Commission also suspects that the default privacy settings that TikTok has for 16-17-year-olds are not the highest by default, which would not be compliant with the DSA, and that push notifications are, by default, not switched off for minors, which could negatively impact children’s safety.”

TikTok could avoid steep fines by committing to remedies recommended by the EC at the conclusion of its investigation.

Regnier told Ars that the EC does not comment on ongoing investigations, but its probe into X has spanned three months so far. Because the DSA does not provide any deadlines that may speed up these kinds of enforcement proceedings, ultimately, the duration of both investigations will depend on how much “the company concerned cooperates,” the EU’s press release said.

A TikTok spokesperson told Ars that TikTok “would continue to work with experts and the industry to keep young people on its platform safe,” confirming that the company “looked forward to explaining this work in detail to the European Commission.”

“TikTok has pioneered features and settings to protect teens and keep under-13s off the platform, issues the whole industry is grappling with,” TikTok’s spokesperson said.

All online platforms are now required to comply with the DSA, but enforcement on TikTok began near the end of July 2023. A TikTok press release last August promised that the platform would be “embracing” the DSA. But in its transparency report, submitted the next month, TikTok acknowledged that the report only covered “one month of metrics” and may not satisfy DSA standards.

“We still have more work to do,” TikTok’s report said, promising that “we are working hard to address these points ahead of our next DSA transparency report.”

AMC to pay $8M for allegedly violating 1988 law with use of Meta Pixel

Stream like no one is watching —

Proposed settlement impacts millions using AMC apps like Shudder and AMC+.

On Thursday, AMC notified subscribers of a proposed $8.3 million settlement that provides awards to an estimated 6 million subscribers of its six streaming services: AMC+, Shudder, Acorn TV, ALLBLK, SundanceNow, and HIDIVE.

The settlement comes in response to allegations that AMC illegally shared subscribers’ viewing history with tech companies like Google, Facebook, and X (aka Twitter) in violation of the Video Privacy Protection Act (VPPA).

Passed in 1988, the VPPA prohibits AMC and other video service providers from sharing “information which identifies a person as having requested or obtained specific video materials or services from a video tape service provider.” It was originally passed to protect individuals’ right to private viewing habits, after a journalist published the mostly unrevealing video rental history of a judge, Robert Bork, who had been nominated to the Supreme Court by Ronald Reagan.

The so-called “Bork Tapes” revealed little—other than that the judge frequently rented spy thrillers and British costume dramas—but lawmakers recognized that speech could be chilled by monitoring anyone’s viewing habits. While the law was born in the era of Blockbuster Video, subscribers suing AMC wrote in their amended complaint that “the importance of legislation like the VPPA in the modern era of datamining is more pronounced than ever before.”

According to subscribers suing, AMC allegedly installed tracking technologies—including the Meta Pixel, the X Tracking Pixel, and Google Tracking Technology—on its website, allowing their personally identifying information to be connected with their viewing history.

Some trackers, like the Meta Pixel, required AMC to choose what kind of activity can be tracked, and subscribers claimed that AMC had willingly opted into sharing video names and URLs with Meta, along with a Facebook ID. “Anyone” could use the Facebook ID, subscribers said, to identify the AMC subscribers “simply by entering https://www.facebook.com/[unencrypted FID]/” into a browser.
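
To make the mechanics concrete, here is a minimal sketch of how a pixel like this could leak identifying viewing data. It is illustrative only: the endpoint and parameter names are invented, not Meta’s actual Pixel API, and the video URL is hypothetical.

```python
# Illustrative sketch only: the endpoint and parameter names below are
# invented, not Meta's actual Pixel API. A tracking pixel "beacon" is just a
# small HTTP GET whose query string reports the page being viewed and an ID.
from urllib.parse import urlencode

def build_pixel_beacon(page_url: str, user_id: str) -> str:
    # A pixel embedded on a video page typically reports the full page URL,
    # which on streaming sites often encodes the title being watched.
    params = {"page": page_url, "uid": user_id}  # hypothetical keys
    return "https://pixel.example.com/tr?" + urlencode(params)

def profile_url_from_fid(fid: str) -> str:
    # The de-anonymization step the subscribers described: an unencrypted
    # Facebook ID resolves directly to a public profile page.
    return f"https://www.facebook.com/{fid}/"

beacon = build_pixel_beacon(
    "https://streaming.example.com/watch/some-video-title", "1234567890"
)
print(beacon)                              # what the third party receives
print(profile_url_from_fid("1234567890"))  # who that beacon identifies
```

Pairing those two pieces of data, a video-specific URL and a persistent ID that resolves to a real name, is exactly the combination the VPPA was written to prohibit.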

X’s ID could similarly be de-anonymized, subscribers alleged, by using tweeterid.com.

AMC “could easily program its AMC Services websites so that this information is not disclosed” to tech companies, subscribers alleged.

Denying wrongdoing, AMC has defended its use of tracking technologies but is proposing to settle with subscribers to avoid uncertain outcomes from litigation, the proposed settlement said.

A hearing to approve the proposed settlement has been scheduled for May 16.

If it’s approved, AMC has agreed to “suspend, remove, or modify operation of the Meta Pixel and other Third-Party Tracking Technologies so that use of such technologies on AMC Services will not result in AMC’s disclosure to the third-party technology companies of the specific video content requested or obtained by a specific individual.”

Google and X did not immediately respond to Ars’ request to comment. Meta declined to comment.

All registered users of AMC services who “requested or obtained video content on at least one of the six AMC services” between January 18, 2021, and January 10, 2024, are currently eligible to submit claims under the proposed settlement. The deadline to submit is April 9.

In addition to receiving a share of the $8.3 million settlement fund, class members will get a free one-week digital subscription.

According to AMC’s notice to subscribers (full disclosure, I am one), AMC’s agreement to avoid sharing subscribers’ viewing histories may change if the VPPA is amended, repealed, or invalidated. If the law changes to permit sharing viewing data at the core of subscribers’ claim, AMC may resume sharing that information with tech companies.

That day could come soon if Patreon has its way. Recently, Patreon asked a federal judge to rule that the VPPA is unconstitutional.

The lawsuit against Patreon is similar, alleging that the platform violated the VPPA by using the Meta Pixel to share its users’ video views with Meta.

Patreon has argued that the VPPA is unconstitutional because it chills speech. Patreon said that the law was enacted “for the express purpose of silencing disclosures about political figures and their video-watching, an issue of undisputed continuing public interest and concern.”

According to Patreon, the VPPA narrowly prohibits video service providers from sharing video titles, but not from sharing information that people may wish to keep private, such as “the genres, performers, directors, political views, sexual content, and every other detail of pre-recorded video that those consumers watch.”

Therefore, Patreon argued, the VPPA “restrains speech” while “doing little if anything to protect privacy” and never protecting privacy “by the least restrictive means.”

That lawsuit remains ongoing, but Patreon’s position is likely to be met with opposition from experts who typically also defend freedom of speech. Experts at the Electronic Privacy Information Center, like AMC subscribers suing, consider the VPPA one of America’s “strongest protections of consumer privacy against a specific form of data collection.” And the Electronic Frontier Foundation (EFF) has already moved to convince the court to reject Patreon’s claim, describing the VPPA in a blog as an “essential” privacy protection.

“EFF is second to none in fighting for everyone’s First Amendment rights in court,” EFF’s blog said. “But Patreon’s First Amendment argument is wrong and misguided. The company seeks to elevate its speech interests over those of Internet users who benefit from the VPPA’s protections.”

Backdoors that let cops decrypt messages violate human rights, EU court says

Building of the European Court of Human Rights in Strasbourg (France).

The European Court of Human Rights (ECHR) has ruled that weakening end-to-end encryption disproportionately risks undermining human rights. The international court’s decision could potentially disrupt the European Commission’s proposed plans to require email and messaging service providers to create backdoors that would allow law enforcement to easily decrypt users’ messages.

This ruling came after Russia’s intelligence agency, the Federal Security Service (FSB), began requiring Telegram to share users’ encrypted messages to deter “terrorism-related activities” in 2017, the ECHR’s ruling said. A Russian Telegram user alleged that the FSB’s requirement violated his rights to a private life and private communications, as well as all Telegram users’ rights.

The Telegram user moved to block the required disclosures after Telegram refused to comply with an FSB order to decrypt the messages of six users suspected of terrorism. According to Telegram, “it was technically impossible to provide the authorities with encryption keys associated with specific users,” and therefore, “any disclosure of encryption keys” would affect the “privacy of the correspondence of all Telegram users,” the ECHR’s ruling said.
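
Telegram’s argument is easier to see with a toy model of end-to-end key agreement. This is a minimal sketch assuming a generic X25519-based design, not Telegram’s actual MTProto protocol: the message key is derived independently on the two devices, so the relay server never holds anything it could disclose.

```python
# Toy model of end-to-end key agreement (generic X25519 + HKDF), not
# Telegram's actual MTProto protocol. Requires the "cryptography" package.
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey
from cryptography.hazmat.primitives.kdf.hkdf import HKDF

alice_private = X25519PrivateKey.generate()  # never leaves Alice's device
bob_private = X25519PrivateKey.generate()    # never leaves Bob's device

# The server relays only the *public* halves; that is all it ever sees.
alice_shared = alice_private.exchange(bob_private.public_key())
bob_shared = bob_private.exchange(alice_private.public_key())
assert alice_shared == bob_shared  # both devices derive the same secret

message_key = HKDF(
    algorithm=hashes.SHA256(), length=32, salt=None, info=b"secret-chat"
).derive(alice_shared)
# There is no server-side copy of message_key to hand over; complying with a
# decryption order would mean shipping weakened client software to everyone.
```

Under a design like this, an order to produce “encryption keys associated with specific users” has nothing server-side to attach to, which is the all-users effect Telegram described.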

For refusing to comply, Telegram was fined, and one court even ordered the app to be blocked in Russia, while dozens of Telegram users rallied to challenge the order and keep Telegram services available in the country. Those court challenges ultimately failed, sending the case before the ECHR while Telegram service tenuously remained available in Russia.

The Russian government told the ECHR that “allegations that the security services had access to the communications of all users” were “unsubstantiated” because their request only concerned six Telegram users.

They further argued that Telegram providing encryption keys to the FSB “did not mean that the information necessary to decrypt encrypted electronic communications would become available to its entire staff.” Essentially, the government believed that FSB staff’s “duty of discretion” would prevent any intrusion on the private lives of Telegram users as described in the ECHR complaint.

Seemingly most critically, the government told the ECHR that any intrusion on private lives resulting from decrypting messages was “necessary” to combat terrorism in a democratic society. To back up this claim, the government pointed to a 2017 terrorist attack that was “coordinated from abroad through secret chats via Telegram.” The government claimed that a second terrorist attack that year was prevented after the government discovered it was being coordinated through Telegram chats.

However, privacy advocates backed up Telegram’s claims that the messaging services couldn’t technically build a backdoor for governments without impacting all its users. They also argued that the threat of mass surveillance could be enough to infringe on human rights. The European Information Society Institute (EISI) and Privacy International told the ECHR that even if governments never used required disclosures to mass surveil citizens, it could have a chilling effect on users’ speech or prompt service providers to issue radical software updates weakening encryption for all users.

In the end, the ECHR concluded that the Telegram user’s rights had been violated, partly due to privacy advocates and international reports that corroborated Telegram’s position that complying with the FSB’s disclosure order would force changes impacting all its users.

The “confidentiality of communications is an essential element of the right to respect for private life and correspondence,” the ECHR’s ruling said. Thus, requiring messages to be decrypted by law enforcement “cannot be regarded as necessary in a democratic society.”

Martin Husovec, a law professor who helped to draft EISI’s testimony, told Ars that EISI is “obviously pleased that the Court has recognized the value of encryption and agreed with us that state-imposed weakening of encryption is a form of indiscriminate surveillance because it affects everyone’s privacy.”

Some Calif. cops still sharing license plate info with anti-abortion states

Dozens of California police agencies are still sharing automated license plate reader (ALPR) data with out-of-state authorities without a warrant, the Electronic Frontier Foundation has revealed. This is occurring despite guidance issued by State Attorney General Rob Bonta last year.

Clarifying a state law that limits state public agencies to sharing ALPR data only with other public agencies, Bonta’s guidance pointed out that “importantly,” the law’s definition of “public agency” “does not include out-of-state or federal law enforcement agencies.”

Bonta’s guidance came after EFF uncovered more than 70 California law enforcement agencies sharing ALPR data with cops in other states, including anti-abortion states. After Bonta clarified the statute, approximately half of these agencies told EFF that they updated their practices to fall in line with Bonta’s reading of the law. Some agencies could not yet confirm that the practice had ended, though.

In a letter to Bonta, EFF praised the guidance as protecting Californians’ privacy but also flagged more than 30 police agencies that either expressly rejected Bonta’s guidance or else refused to confirm that they’ve stopped sharing data with out-of-state authorities. EFF staff attorney Jennifer Pinsof told Ars that it’s likely that additional agencies are also failing to comply, such as agencies that EFF never contacted or that recently acquired ALPR technology.

“We think it is very likely other agencies in the state remain out of compliance with the law,” EFF’s letter said.

EFF is hoping that making Bonta aware of the ongoing non-compliance will end sharing of highly sensitive location data with police agencies in states that do not provide as many privacy protections as California does. If Bonta “takes initiative” to enforce compliance, Pinsof said that police may be more willing to consider the privacy risks involved, since Bonta can “communicate more easily with the law enforcement community” than privacy advocates can.

However, even Bonta may struggle, as some agencies “have dug their heels in,” Pinsof said.

Many state police agencies simply do not agree with Bonta’s interpretation of the law, which they claim does allow sharing ALPR data with cops in other states. In a November letter, a lawyer representing the California State Sheriffs’ Association, California Police Chiefs Association, and California Peace Officers’ Association urged Bonta to “revisit” his position that the law “precludes sharing ALPR data with out-of-state governmental entities for legitimate law enforcement purposes.”

The cops argued that sharing ALPR data with cops in other states assists “in the apprehension and prosecution of child abductors, narcotics traffickers, human traffickers, extremist hate groups, and other cross-state criminal enterprises.”

They told Bonta that the law “was not designed to block law enforcement from sharing ALPR data outside California where the information could be used to intercede with criminal offenders moving from state to state.” As they see it, cooperation between state authorities is “absolutely imperative to effective policing.”

Here’s where cops say the ambiguity lies. The law defines public agency as “the state, any city, county, or city and county, or any agency or political subdivision of the state or a city, county, or city and county, including, but not limited to, a law enforcement agency.” According to cops, because the law does not “specifically refer to the State of California” or “this state,” it could be referring to agencies in any state.

“Had the legislation referred to ‘a State’ rather than ‘the State,’ there would be no debate about whether sharing was prohibited,” the police associations’ letter said. “We see no basis to read such a limitation into the legislation based on the word ‘the’ rather than ‘a.'”

The police associations also reminded Bonta that the California Legislature considered passing a bill that would have explicitly “prohibited the out-of-state sharing of ALPR information” with states interfering with “the right to seek abortion services” but “rejected it.” They told Bonta that “the Legislature’s refusal to adopt a position consistent with the position” he is “advancing is troubling.”

EFF said that California police can still share ALPR data with out-of-state police in situations permitted by law, like when out-of-state cops have a “warrant for ALPR information based on probable cause and particularity.” Instead, EFF alleged that cops are “dragnet sharing through commercial cloud storage systems” without warrants, which could be violating Californians’ right to privacy, as well as First and Fourth Amendment rights.

EFF urged Bonta to reject the police associations’ “crabbed interpretation” of the statute, but it’s unclear if Bonta will ever respond. Pinsof told Ars that Bonta did not directly respond to EFF’s initial investigation, but the guidance he later issued seemed to suggest that he got EFF’s message.

Police associations and Bonta’s office did not respond to Ars’ request to comment.

Data broker allegedly selling de-anonymized info to face FTC lawsuit after all

The Federal Trade Commission has succeeded in keeping alive its first federal court case against a geolocation data broker that’s allegedly unfairly selling large quantities of data in violation of the FTC Act.

On Saturday, US District Judge Lynn Winmill denied Kochava’s motion to dismiss an amended FTC complaint, which he said plausibly argued that “Kochava’s data sales invade consumers’ privacy and expose them to risks of secondary harms by third parties.”

Winmill’s ruling reversed a dismissal of the FTC’s initial complaint, which the court previously said failed to adequately allege that Kochava’s data sales cause or are likely to cause a “substantial” injury to consumers.

The FTC has accused Kochava of selling “a substantial amount of data obtained from millions of mobile devices across the world”—allegedly combining precise geolocation data with a “staggering amount of sensitive and identifying information” without users’ knowledge or informed consent. This data, the FTC alleged, “is not anonymized and is linked or easily linkable to individual consumers” without mining “other sources of data.”

Kochava’s data sales allegedly allow its customers—whom the FTC noted often pay tens of thousands of dollars monthly—to target specific individuals by combining Kochava data sets. Using just Kochava data, marketers can create “highly granular” portraits of ad targets such as “a woman who visits a particular building, the woman’s name, email address, and home address, and whether the woman is African-American, a parent (and if so, how many children), or has an app identifying symptoms of cancer on her phone.” Just one of Kochava’s databases “contains ‘comprehensive profiles of individual consumers,’ with up to ‘300 data points’ for ‘over 300 million unique individuals,'” the FTC reported.
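
Here is a hedged sketch of the kind of linkage the FTC describes, with invented field names and toy datasets rather than Kochava’s actual schema: a nominally anonymous mobile advertising ID becomes identifying once its location pings are joined against a second dataset that maps addresses to people.

```python
# Invented schema for illustration; not Kochava's actual data model.
from dataclasses import dataclass

@dataclass
class LocationPing:
    ad_id: str       # mobile advertising ID, nominally "anonymous"
    lat: float
    lon: float
    timestamp: str

@dataclass
class HouseholdRecord:
    name: str
    email: str
    home_lat: float
    home_lon: float

def near(lat1: float, lon1: float, lat2: float, lon2: float,
         eps: float = 0.0005) -> bool:
    # Roughly a ~50-meter box; real pipelines use proper geo distance.
    return abs(lat1 - lat2) < eps and abs(lon1 - lon2) < eps

def link_ad_ids_to_people(pings, households):
    # An ad ID that repeatedly pings overnight at one address is, in
    # practice, linkable to whoever lives there.
    for ping in pings:
        for hh in households:
            if near(ping.lat, ping.lon, hh.home_lat, hh.home_lon):
                yield ping.ad_id, hh.name, hh.email

pings = [LocationPing("ad-123", 34.7465, -92.2896, "2024-01-02T02:14")]
households = [HouseholdRecord("Jane Doe", "jane@example.com", 34.7465, -92.2896)]
print(list(link_ad_ids_to_people(pings, households)))
```

Once that join exists, every other attribute tied to the ad ID (app usage, visits to clinics or places of worship) attaches to a named person.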

This harms consumers, the FTC alleged, in “two distinct ways”—by invading their privacy and by causing “an increased risk of suffering secondary harms, such as stigma, discrimination, physical violence, and emotional distress.”

In its amended complaint, the FTC overcame deficiencies in its initial complaint by citing specific examples of consumers already known to have been harmed by brokers sharing sensitive data without their consent. That included a Catholic priest who resigned after he was outed by a group using precise mobile geolocation data to track his personal use of Grindr and his movements to “LGBTQ+-associated locations.” The FTC also pointed to invasive practices by journalists using precise mobile geolocation data to identify and track military and law enforcement officers over time, as well as data brokers tracking “abortion-minded women” who visited reproductive health clinics to target them with ads about abortion and alternatives to abortion.

“Kochava’s practices intrude into the most private areas of consumers’ lives and cause or are likely to cause substantial injury to consumers,” the FTC’s amended complaint said.

The FTC is seeking a permanent injunction to stop Kochava from allegedly selling sensitive data without user consent.

Kochava dismisses the examples of consumer harms in the FTC’s amended complaint as “anecdotes” disconnected from its own activities. The data broker was seemingly so confident that Winmill would agree to dismiss the FTC’s amended complaint that the company sought sanctions against the FTC for what it construed as a “baseless” filing. According to Kochava, many of the FTC’s allegations were “knowingly false.”

Ultimately, the court found no evidence that the FTC’s complaints were baseless. Instead of dismissing the case and ordering the FTC to pay sanctions, Winmill wrote in his order that Kochava’s motion to dismiss “misses the point” of the FTC’s filing, which was to allege that Kochava’s data sales are “likely” to cause alleged harms. Because the FTC had “significantly” expanded factual allegations, the agency “easily” satisfied the plausibility standard to allege substantial harms were likely, Winmill said.

Kochava CEO and founder Charles Manning said in a statement provided to Ars that Kochava “expected” Winmill’s ruling and is “confident” that Kochava “will prevail on the merits.”

“This case is really about the FTC attempting to make an end-run around Congress to create data privacy law,” Manning said. “The FTC’s salacious hypotheticals in its amended complaint are mere scare tactics. Kochava has always operated consistently and proactively in compliance with all rules and laws, including those specific to privacy.”

In a press release announcing the FTC lawsuit in 2022, the director of the FTC’s Bureau of Consumer Protection, Samuel Levine, said that the FTC was determined to halt Kochava’s allegedly harmful data sales.

“Where consumers seek out health care, receive counseling, or celebrate their faith is private information that shouldn’t be sold to the highest bidder,” Levine said. “The FTC is taking Kochava to court to protect people’s privacy and halt the sale of their sensitive geolocation information.”

Apple warns proposed UK law will affect software updates around the world

Heads up —

Apple may leave the UK if required to provide advance notice of product updates.

Apple is “deeply concerned” that proposed changes to a United Kingdom law could give the UK government unprecedented power to “secretly veto” privacy and security updates to its products and services, the tech giant said in a statement provided to Ars.

If passed, potentially this spring, the amendments to the UK’s Investigatory Powers Act (IPA) could deprive not just UK users, but all users globally of important new privacy and security features, Apple warned.

“Protecting our users’ privacy and the security of their data is at the very heart of everything we do at Apple,” Apple said. “We’re deeply concerned the proposed amendments” to the IPA “now before Parliament place users’ privacy and security at risk.”

The IPA was initially passed in 2016 to ensure that UK officials had lawful access to user data to investigate crimes like child sexual exploitation or terrorism. Proposed amendments were announced last November, after a review showed that the “Act has not been immune to changes in technology over the last six years” and “there is a risk that some of these technological changes have had a negative effect on law enforcement and intelligence services’ capabilities.”

The proposed amendments would require any company that fields government data requests to notify UK officials of any updates it plans to make that could restrict the UK government’s access to this data, including any updates impacting users outside the UK.

UK officials said that this would “help the UK anticipate the risk to public safety posed by the rolling out of technology by multinational companies that precludes lawful access to data. This will reduce the risk of the most serious offenses such as child sexual exploitation and abuse or terrorism going undetected.”

According to the BBC, the House of Lords will begin debating the proposed changes on Tuesday.

Ahead of that debate, Apple described the amendments on Monday as “an unprecedented overreach by the government” that “if enacted” could allow the UK to “attempt to secretly veto new user protections globally, preventing us from ever offering them to customers.”

In a letter last year, Apple argued that “it would be improper for the Home Office to act as the world’s regulator of security technology.”

Apple told the UK Home Office that imposing “secret requirements on providers located in other countries” that apply to users globally “could be used to force a company like Apple, that would never build a backdoor, to publicly withdraw critical security features from the UK market, depriving UK users of these protections.” It could also “dramatically disrupt the global market for security technologies, putting users in the UK and around the world at greater risk,” Apple claimed.

The proposed changes, Apple said, “would suppress innovation, stifle commerce, and—when combined with purported extraterritorial application—make the Home Office the de facto global arbiter of what level of data security and encryption are permissible.”

UK defends proposed changes

The UK Home Office has repeatedly stressed that these changes do not “provide powers for the Secretary of State to approve or refuse technical changes,” but “simply” require companies “to inform the Secretary of State of relevant changes before those changes are implemented.”

“The intention is not to introduce a consent or veto mechanism or any other kind of barrier to market,” a UK Home Office fact sheet said. “A key driver for this amendment is to give operational partners time to understand the change and adapt their investigative techniques where necessary, which may in some circumstances be all that is required to maintain lawful access.”

The Home Office has also claimed that “these changes do not directly relate to end-to-end encryption,” while admitting that they “are designed to ensure that companies are not able to unilaterally make design changes which compromise exceptional lawful access where the stringent safeguards of the IPA regime are met.”

This seems to suggest that companies will not be allowed to cut off the UK government from accessing encrypted data under certain circumstances, which concerns privacy advocates who consider end-to-end encryption a vital user privacy and security protection. Earlier this month, civil liberties groups including Big Brother Watch, Liberty, Open Rights Group, and Privacy International filed a joint brief opposing the proposed changes, the BBC reported, warning that passing the amendments would be “effectively transforming private companies into arms of the surveillance state and eroding the security of devices and the Internet.”

“We have always been clear that we support technological innovation and private and secure communications technologies, including end-to-end encryption, but this cannot come at a cost to public safety,” a UK government official told the BBC.

The UK government may face opposition to the amendments from more than just tech companies and privacy advocates, though. In Apple’s letter last year, the tech giant noted that the proposed changes to the IPA could conflict with EU and US laws, including the EU’s General Data Protection Regulation—considered the world’s strongest privacy law.

Under the GDPR, companies must implement measures to safeguard users’ personal data, Apple said, noting that “encryption is one means by which a company can meet” that obligation.

“Secretly installing backdoors in end-to-end encrypted technologies in order to comply with UK law for persons not subject to any lawful process would violate that obligation,” Apple argued.

NSA finally admits to spying on Americans by purchasing sensitive data

Leaving Americans in the dark —

Violating Americans’ privacy “not just unethical but illegal,” senator says.

The National Security Agency (NSA) has admitted to buying records from data brokers detailing which websites and apps Americans use, US Senator Ron Wyden (D-Ore.) revealed Thursday.

This news follows Wyden’s push last year that forced the FBI to admit that it was also buying Americans’ sensitive data. Now, the senator is calling on all intelligence agencies to “stop buying personal data from Americans that has been obtained illegally by data brokers.”

“The US government should not be funding and legitimizing a shady industry whose flagrant violations of Americans’ privacy are not just unethical but illegal,” Wyden said in a letter to Director of National Intelligence (DNI) Avril Haines. “To that end, I request that you adopt a policy that, going forward,” intelligence agencies “may only purchase data about Americans that meets the standard for legal data sales established by the FTC.”

Wyden suggested that the intelligence community might be helping data brokers violate an FTC order requiring that Americans are provided “clear and conspicuous” disclosures and give informed consent before their data can be sold to third parties. In the seven years that Wyden has been investigating data brokers, he said that he has not been made “aware of any company that provides such a warning to users before collecting their data.”

The FTC’s order came after reaching a settlement with a data broker called X-Mode, which admitted to selling sensitive location data without user consent and even to selling data after users revoked consent.

In his letter, Wyden referred to this order as the FTC outlining “new rules,” but that’s not exactly what happened. Instead of issuing rules, FTC settlements often serve as “common law,” signaling to marketplaces which practices violate laws like the FTC Act.

According to the FTC’s analysis of the order on its site, X-Mode violated the FTC Act by “unfairly selling sensitive data, unfairly failing to honor consumers’ privacy choices, unfairly collecting and using consumer location data, unfairly collecting and using consumer location data without consent verification, unfairly categorizing consumers based on sensitive characteristics for marketing purposes, deceptively failing to disclose use of location data, and providing the means and instrumentalities to engage in deceptive acts or practices.”

The FTC declined to comment on whether the order also applies to data purchases by intelligence agencies. In defining “location data,” the FTC order seems to carve out exceptions for any data collected outside the US and used for either “security purposes” or “national security purposes conducted by federal agencies or other federal entities.”

NSA must purge data, Wyden says

NSA officials told Wyden that the intelligence agency is not only purchasing data on Americans located in the US but has also bought Americans’ Internet metadata.

Wyden warned that the former “can reveal sensitive, private information about a person based on where they go on the Internet, including visiting websites related to mental health resources, resources for survivors of sexual assault or domestic abuse, or visiting a telehealth provider who focuses on birth control or abortion medication.” And the latter “can be equally sensitive.”
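
A trivial sketch shows why domain-level metadata alone is revealing; the domains and category labels below are invented placeholders, not any agency’s or broker’s actual data.

```python
# Invented domains and labels for illustration only. The point: no message
# content is needed; a bare list of visited domains already labels a person.
SENSITIVE_DOMAINS = {
    "mental-health-helpline.example": "mental health support",
    "dv-survivor-resources.example": "domestic abuse support",
    "telehealth-birthcontrol.example": "reproductive health care",
}

def infer_from_metadata(visited_domains: list[str]) -> list[str]:
    return [SENSITIVE_DOMAINS[d] for d in visited_domains
            if d in SENSITIVE_DOMAINS]

print(infer_from_metadata([
    "news.example",
    "telehealth-birthcontrol.example",
]))  # -> ['reproductive health care']
```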

To fix the problem, Wyden wants the intelligence community to agree to inventory and then “promptly” purge the data that agencies allegedly illegally collected on Americans without a warrant. Wyden said that these warrantless purchases have allowed agencies like the NSA and the FBI “in effect” to use “their credit card to circumvent the Fourth Amendment.”

X-Mode’s practices, the FTC said, were likely to cause “substantial injury to consumers that are not outweighed by countervailing benefits to consumers or competition and are not reasonably avoidable by consumers themselves.” Wyden’s spokesperson, Keith Chu, told Ars that “the data brokers selling Internet records to the government appear to engage in nearly identical conduct” to X-Mode.

The FTC’s order also indicates “that Americans must be told and agree to their data being sold to ‘government contractors for national security purposes’ for the practice to be allowed,” Wyden said.

DoD defends shady data broker dealings

In response to Wyden’s letter to Haines, the Under Secretary of Defense for Intelligence & Security, Ronald Moultrie, said that the Department of Defense (DoD) “adheres to high standards of privacy and civil liberties protections” when buying Americans’ location data. He also said that he was “not aware of any requirement in US law or judicial opinion” forcing the DoD to “obtain a court order in order to acquire, access, or use” commercially available information that “is equally available for purchase to foreign adversaries, US companies, and private persons as it is to the US government.”

In another response to Wyden, NSA leader General Paul Nakasone told Wyden that the “NSA takes steps to minimize the collection of US person information” and “continues to acquire only the most useful data relevant to mission requirements.” That includes some commercially available information on Americans “where one side of the communications is a US Internet Protocol address and the other is located abroad,” data which Nakasone said is “critical to protecting the US Defense Industrial Base” that sustains military weapons systems.

While the FTC has so far cracked down on a few data brokers, Wyden believes that the shady practice of selling data without Americans’ informed consent is an “industry-wide” problem in need of regulation. Rather than being a customer in this sketchy marketplace, intelligence agencies should stop funding companies allegedly guilty of what the FTC has described as “intrusive” and “unchecked” surveillance of Americans, Wyden said.

According to Moultrie, DNI Haines decides what information sources are “relevant and appropriate” to aid intelligence agencies.

But Wyden believes that Americans should have the opportunity to opt out of consenting to such invasive, secretive data collection. He said that by purchasing data from shady brokers, US intelligence agencies have helped create a world where consumers have no opportunity to consent to intrusive tracking.

“The secrecy around data purchases was amplified because intelligence agencies have sought to keep the American people in the dark,” Wyden told Haines.

Amazon Ring stops letting police request footage in Neighbors app after outcry

Neighborhood watch —

Warrantless access may still be granted during vaguely defined “emergencies.”

Amazon Ring has shut down a controversial feature in its community safety app Neighbors that for years allowed police to contact homeowners and request doorbell and surveillance camera footage without a warrant.

In a blog, the head of the Neighbors app, Eric Kuhn, confirmed that “public safety agencies like fire and police departments can still use the Neighbors app to share helpful safety tips, updates, and community events,” but the Request for Assistance (RFA) tool will be disabled.

“They will no longer be able to use the RFA tool to request and receive video in the app,” Kuhn wrote.

Kuhn did not explain why Neighbors chose to “sunset” the RFA tool, but privacy advocates and lawmakers have long criticized Ring for helping to expand police surveillance in communities, seemingly threatening privacy and enabling racial profiling, CNBC reported. Among the staunchest critics of Ring’s seemingly tight relationship with law enforcement is the Electronic Frontier Foundation (EFF), which has long advocated for Ring and its users to stop sharing footage with police without a warrant.

In a statement provided to Ars, EFF senior policy analyst Matthew Guariglia noted that Ring had launched the RFA tool after EFF and other organizations had criticized Ring for allowing police to privately email warrantless requests for footage in the Neighbors app. Rather than end requests through the app entirely, Ring appeared to see the RFA tool as a middle ground, providing transparency about how many requests were being made, without ending police access to community members readily sharing footage on the app.

“Now, Ring hopefully will altogether be out of the business of platforming casual and warrantless police requests for footage to its users,” Guariglia said.

Moving forward, police and public safety agencies with warrants will still be able to request footage, which Amazon documents in transparency reports published every six months. These reports show thousands of search warrant requests and even more “preservation requests,” which allow government agencies to request to preserve user information for up to 90 days, “pending the receipt of a legally valid and binding order.”

“If we are legally required to comply, we will provide information responsive to the government demand,” Ring’s website says.

Ring rebrand embraces “hope and joy”

Guariglia said that Ring sunsetting the RFA tool “is a step in the right direction,” but it has “come after years of cozy relationships with police and irresponsible handling of data” that has, for many, damaged trust in Ring.

In 2022, EFF reported that Ring admitted that “there are ’emergency’ instances when police can get warrantless access to Ring personal devices without the owner’s permission.” And last year, Ring reached a $5.8 million settlement with the Federal Trade Commission, refunding customers for what the FTC described as “compromising its customers’ privacy by allowing any employee or contractor to access consumers’ private videos and by failing to implement basic privacy and security protections, enabling hackers to take control of consumers’ accounts, cameras, and videos.”

Because of this history, Guariglia said that EFF is “still deeply skeptical about law enforcement’s and Ring’s ability to determine what is, or is not, an emergency that requires the company to hand over footage without a warrant or user consent.”

EFF recommends additional steps that Ring could take to enhance user privacy, like enabling end-to-end encryption by default and turning off default audio collection, Guariglia said.

Bloomberg noted that this change to the Neighbors app comes after a new CEO, Liz Hamren, came on board last year, announcing that “Ring was rethinking its mission statement.” Because Ring was adding indoor and backyard home monitoring and business services, the company’s initial mission statement—”to reduce crime in neighborhoods”—was no longer, as founding Ring CEO Jamie Siminoff had promoted it, “at the core” of what Ring does.

In Kuhn’s blog post, barely any attention is given to the end of the RFA tool. A Ring spokesperson declined to tell Ars how many users had volunteered footage through the tool, so it remains unclear how popular it was.

Rather than clarifying the decision to end the RFA tool, Kuhn’s post primarily focused on describing how much Ring users loved “heartwarming or silly” footage like a “bear relaxing in a pool.” Under Hamren and Kuhn’s guidance, the Neighbors app appears to be embracing a new mission of connecting communities to find “hope and joy” in their areas through new features like Moments and Best of Ring.

By contrast, when Ring introduced the RFA tool, it said that its mission was “to make neighborhoods safer for everyone.” On a help page, Ring bragged that police had used Neighbors to recover stolen guns and medical supplies. Because of these selling points, Ring’s community safety features may still be priorities for some users. So, while Ring may be ready to move on from highlighting its partnership with law enforcement as a “core” part of its service, its users may still be used to seeing their cameras as tools that should be readily accessible to police.

As law enforcement agencies lose access to Neighbors’ RFA tool, Guariglia said that it’s important to raise awareness among Ring owners that police can’t demand access to footage without a warrant.

“This announcement will not stop police from trying to get Ring footage directly from device owners without a warrant,” Guariglia said. “Ring users should also know that when police knock on their door, they have the right to, and should, request that police get a warrant before handing over footage.”

Amazon Ring stops letting police request footage in Neighbors app after outcry Read More »

patreon:-blocking-platforms-from-sharing-user-video-data-is-unconstitutional

Patreon: Blocking platforms from sharing user video data is unconstitutional


Patreon, a monetization platform for content creators, has asked a federal judge to deem unconstitutional a rarely invoked law that some privacy advocates consider one of the nation’s “strongest protections of consumer privacy against a specific form of data collection.” Such a ruling would end decades of US efforts to carefully shield the privacy of millions of Americans’ personal video viewing habits.

The Video Privacy Protection Act (VPPA) blocks businesses from sharing data on customers’ video purchases and rentals with third parties. At a minimum, the VPPA requires written consent each time a business wants to share this sensitive video data—including the title, description, and, in most cases, the subject matter.

The VPPA was passed in 1988 in response to backlash over a reporter sharing the video store rental history of a judge, Robert Bork, who had been nominated to the Supreme Court by Ronald Reagan. The report revealed that Bork apparently liked spy thrillers and British costume dramas and suggested that maybe the judge had a family member who dug John Hughes movies.

Although the videos that Bork rented “revealed nothing particularly salacious” about the judge, the intent of reporting the “Bork Tapes” was to confront the judge “with his own vulnerability to privacy harms” during a time when the Supreme Court nominee had “criticized the constitutional right to privacy” as “a loose canon in the law,” Harvard Law Review noted.

Even though no harm was caused by sharing the “Bork Tapes,” policymakers on both sides of the aisle agreed that the privacy of people’s viewing habits ought to be safeguarded, or else the risk of exposure could chill First Amendment-protected speech by pushing people to change what they watch. The US government has not budged on this stance since. It is supporting a lawsuit filed in 2022 by Patreon users who claim that, although no harm was caused, damages are owed because Patreon allegedly violated the VPPA by sharing data on videos they watched on the platform with Facebook through the Meta Pixel, without users’ written consent.

“Restricting the ability of those who possess a consumer’s video purchase, rental, or request history to disclose such information directly advances the goal of keeping that information private and protecting consumers’ intellectual freedom,” the Department of Justice’s brief said.

The Meta Pixel is a piece of code used by companies like Patreon to better target content to users by tracking their activity and monitoring conversions on Meta platforms. “In simplest terms,” Patreon users said in an amended complaint, “the Pixel allows Meta to know what video content one of its users viewed on Patreon’s website.”
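
To make the mechanism concrete, below is a minimal sketch, in TypeScript, of the kind of event a Pixel integration can fire when a visitor plays a video. The pixel ID, video title, and content ID are hypothetical; the snippet illustrates how the Pixel’s standard “ViewContent” event works in general, not Patreon’s actual integration.

```ts
// Hypothetical sketch of a Meta Pixel integration reporting a video view.
// The pixel ID, title, and content ID below are invented for illustration.

// The Pixel base snippet (loaded from connect.facebook.net) defines a global
// fbq() function; this declaration only lets the sketch type-check.
declare const fbq: (
  command: "init" | "track",
  idOrEvent: string,
  params?: Record<string, unknown>,
) => void;

fbq("init", "000000000000000"); // hypothetical pixel ID for the site

// Fired when a visitor plays a video. Because the event carries the title,
// Meta learns what this specific visitor watched: the kind of disclosure
// the plaintiffs say requires written consent under the VPPA.
fbq("track", "ViewContent", {
  content_type: "video",
  content_name: "Example Video Title", // hypothetical title
  content_ids: ["video-12345"],        // hypothetical internal video ID
});
```

Because the resulting request is sent to Meta’s servers along with the visitor’s Facebook cookies, Meta can often tie the reported title to a specific account, which is exactly the link the complaint describes.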

The Pixel is currently at the center of a pile of privacy lawsuits in which people accuse various platforms of using it to covertly share sensitive data, including health and financial data, without users’ consent.

Several of those lawsuits have specifically lobbed VPPA claims, a trend that users argue validates the urgency of retaining the protections Patreon now seeks to strike down. The DOJ argued that “the explosion of recent VPPA cases” is proof “that the disclosures the statute seeks to prevent are a legitimate concern,” despite Patreon’s arguments that the statute does “nothing to materially or directly advance the privacy interests it supposedly was enacted to protect.”

Patreon’s attack on the VPPA

Patreon has argued in a recent court filing that the VPPA was not enacted to protect average video viewers from embarrassing and unwarranted disclosures but “for the express purpose of silencing disclosures about political figures and their video-watching, an issue of undisputed continuing public interest and concern.”

That’s one of many ways that the VPPA silences speech, Patreon argued, by allegedly preventing disclosures regarding public figures that are relevant to public interest.

Among other “fatal flaws,” Patreon alleged, the VPPA “restrains speech” while “doing little if anything to protect privacy” and never protecting privacy “by the least restrictive means.”

Patreon also claimed that the VPPA is too narrow, since it covers only pre-recorded videos. The law prevents video service providers from disclosing to any other person the titles of videos that someone watched, but it does not necessarily stop platforms from sharing information about “the genres, performers, directors, political views, sexual content, and every other detail of pre-recorded video that those consumers watch.”

Patreon: Blocking platforms from sharing user video data is unconstitutional Read More »