
Opinion: How to design a US data privacy law

robust privacy protection —

Op-ed: Why you should care about the GDPR, and how the US could develop a better version.


Nick Dedeke is an associate teaching professor at Northeastern University, Boston. His research interests include digital transformation strategies, ethics, and privacy. His research has been published in IEEE Management Review, IEEE Spectrum, and the Journal of Business Ethics. He holds a PhD in Industrial Engineering from the University of Kaiserslautern-Landau, Germany. The opinions in this piece do not necessarily reflect the views of Ars Technica.

In an earlier article, I discussed a few of the flaws in Europe’s flagship data privacy law, the General Data Protection Regulation (GDPR). Building on that critique, I would now like to go further, proposing specifications for developing a robust privacy protection regime in the US.

Writers must overcome several hurdles to have a chance at persuading readers about possible flaws in the GDPR. First, some readers are skeptical of any piece criticizing the GDPR because they believe the law is still too young to evaluate. Second, some suspect that authors criticizing the GDPR might be covert supporters of Big Tech’s anti-GDPR agenda. (I can assure readers that I have never worked to support any agenda of Big Tech companies.)

In this piece, I will highlight the price of ignoring the GDPR. Then, I will present several conceptual flaws of the GDPR that have been acknowledged by one of the law’s lead architects. Next, I will propose characteristics and design requirements that countries like the United States should consider when developing a privacy protection law. Lastly, I will offer a few reasons why everyone should care about this project.

The high price of ignoring the GDPR

People sometimes assume that the GDPR is mostly a “bureaucratic headache,” but this perspective is no longer valid. Consider the following enforcement actions by GDPR regulators in different countries.

  • In May 2023, the Irish authorities hit Meta with a fine of $1.3 billion for unlawfully transferring personal data from the European Union to the US.
  • On July 16, 2021, the Luxembourg National Commission for Data Protection (CNPD) issued a fine of 746 million euros ($888 million) to Amazon. The fine stemmed from a May 2018 complaint against Amazon, filed on behalf of 10,000 people by a French privacy rights group.
  • On September 5, 2022, Ireland’s Data Protection Commission (DPC) issued a 405 million-euro GDPR fine to Meta Ireland as a penalty for violating the GDPR’s stipulations on the lawful processing of children’s data.

In other words, the GDPR is not merely a bureaucratic matter; it can trigger hefty, unexpected fines. The notion that the GDPR can be ignored is a fatal error.

9 conceptual flaws of the GDPR: Perspective of the GDPR’s lead architect

Axel Voss is one of the lead architects of the GDPR. He is a member of the European Parliament and authored the 2011 initiative report titled “Comprehensive Approach to Personal Data Protection in the EU” when he was the European Parliament’s rapporteur. His call for action resulted in the development of the GDPR legislation. After observing the unfulfilled promises of the GDPR, Voss wrote a position paper highlighting the law’s weaknesses. I want to mention nine of the flaws that Voss described.

First, while the GDPR was excellent in theory and pointed a path toward improved standards for data protection, it is an overly bureaucratic law, created largely top-down by EU bureaucrats.

Second, the law is based on the premise that data protection is a fundamental right of EU persons. Hence, its stipulations are absolute and one-sided, laser-focused on protecting the “fundamental rights and freedoms” of natural persons. In doing so, the GDPR’s architects took the relationship between the state and the citizen and applied it to the relationships between citizens and companies and between companies and their peers. This construction is one reason the obligations imposed on data controllers and processors are so rigid.

Third, the GDPR aims to empower data subjects by enshrining their rights into law. Specifically, it enshrines nine data subject rights: the right to be informed, the right of access, the right to rectification, the right to erasure (the “right to be forgotten”), the right to data portability, the right to restrict processing, the right to object to the processing of personal data, the right to object to automated processing, and the right to withdraw consent. As with any list, there is always a concern that some rights may be missing, and indeed, the data subject rights the GDPR protects are not exhaustive. Omitting critical rights hinders the law’s effectiveness in protecting privacy.

Fourth, the GDPR is grounded in a prohibition-and-limitation approach to data protection. For example, the principle of purpose limitation excludes chance discoveries in science. This ignores the reality that current technologies, such as machine learning and artificial intelligence applications, function differently. Hence, old data protection mindsets such as data minimization and storage limitation are no longer workable.

Fifth, the GDPR posits that, on principle, every processing of personal data restricts the data subject’s right to data protection. It therefore deems any processing of personal data a potential risk, forbids processing in principle, and allows it only when a legal ground is met. Such an anti-processing, anti-sharing approach may not make sense in a data-driven economy.

Sixth, the law does not distinguish between low-risk and high-risk applications; it imposes the same obligations on every type of data processing, with a few exceptions requiring consultation of the supervisory authority for high-risk applications.

Seventh, the GDPR lacks exemptions for low-risk processing scenarios, or for cases in which SMEs, startups, non-commercial entities, or private citizens are the data controllers. Nor are there provisions protecting the rights of controllers and third parties in scenarios where the data controller has a legitimate interest in protecting business and trade secrets, fulfilling confidentiality obligations, or avoiding the huge, disproportionate effort required to meet GDPR obligations.

Eighth, the GDPR lacks a mechanism that allows SMEs and startups to shift the compliance burden onto third parties, which then store and process data.

Ninth, the GDPR relies heavily on government-based bureaucratic monitoring and administration of privacy compliance. This means an extensive bureaucratic system is needed to manage the compliance regime.

There are other issues with GDPR enforcement (see pieces by Matt Burgess and Anda Bologa) and its negative impacts on the EU’s digital economy and on Irish technology companies. This piece will focus only on the nine flaws described above. These nine flaws are some of the reasons why the US authorities should not simply copy the GDPR.

The good news is that many of these flaws can be resolved.



Cops’ favorite face image search engine fined $33M for privacy violation


A controversial facial recognition tech company behind a vast face image search engine widely used by cops has been fined approximately $33 million in the Netherlands for serious data privacy violations.

According to the Dutch Data Protection Authority (DPA), Clearview AI “built an illegal database with billions of photos of faces” by crawling the web and without gaining consent, including from people in the Netherlands.

Clearview AI’s technology—which has been banned in some US cities over concerns that it gives law enforcement unlimited power to track people in their daily lives—works by pulling in more than 40 billion face images from the web without setting “any limitations in terms of geographical location or nationality,” the Dutch DPA found. Perhaps most concerning, the Dutch DPA said, Clearview AI also provides “facial recognition software for identifying children,” therefore indiscriminately processing personal data of minors.

Trained on this face image data, the technology makes it possible to upload a photo of anyone and search for matches on the Internet. People appearing in search results, the Dutch DPA found, can be “unambiguously” identified. Billed as a public safety resource accessible only by law enforcement, Clearview AI’s face database casts too wide a net, the Dutch DPA said, with the majority of people pulled into the tool likely never becoming subject to a police search.

“The processing of personal data is not only complex and extensive, it moreover offers Clearview’s clients the opportunity to go through data about individual persons and obtain a detailed picture of the lives of these individual persons,” the Dutch DPA said. “These processing operations therefore are highly invasive for data subjects.”

Clearview AI had no legitimate interest under the European Union’s General Data Protection Regulation (GDPR) for the company’s invasive data collection, Dutch DPA Chairman Aleid Wolfsen said in a press release. The Dutch official likened Clearview AI’s sprawling overreach to “a doom scenario from a scary film,” while emphasizing in his decision that Clearview AI has stopped responding to requests to access or remove data not only from citizens in the Netherlands but from people across the EU.

“Facial recognition is a highly intrusive technology that you cannot simply unleash on anyone in the world,” Wolfsen said. “If there is a photo of you on the Internet—and doesn’t that apply to all of us?—then you can end up in the database of Clearview and be tracked.”

To protect Dutch citizens’ privacy, the Dutch DPA imposed a roughly $33 million fine that could go up by about $5.5 million if Clearview AI does not follow orders on compliance. Any Dutch businesses attempting to use Clearview AI services could also face “hefty fines,” the Dutch DPA warned, as that “is also prohibited” under the GDPR.

Clearview AI was given three months to appoint a representative in the EU to stop processing personal data—including sensitive biometric data—in the Netherlands and to update its privacy policies to inform users in the Netherlands of their rights under the GDPR. But the company only has one month to resume processing requests for data access or removals from people in the Netherlands who otherwise find it “impossible” to exercise their rights to privacy, the Dutch DPA’s decision said.

It appears that Clearview AI has no intentions to comply, however. Jack Mulcaire, the chief legal officer for Clearview AI, confirmed to Ars that the company maintains that it is not subject to the GDPR.

“Clearview AI does not have a place of business in the Netherlands or the EU, it does not have any customers in the Netherlands or the EU, and does not undertake any activities that would otherwise mean it is subject to the GDPR,” Mulcaire said. “This decision is unlawful, devoid of due process and is unenforceable.”

But the Dutch DPA found that GDPR applies to Clearview AI because it gathers personal information about Dutch citizens without their consent and without ever alerting users to the data collection at any point.

“People who are in the database also have the right to access their data,” the Dutch DPA said. “This means that Clearview has to show people which data the company has about them, if they ask for this. But Clearview does not cooperate in requests for access.”

Dutch DPA vows to investigate Clearview AI execs

In the press release, Wolfsen said that the Dutch DPA has “to draw a very clear line” underscoring the “incorrect use of this sort of technology” after Clearview AI refused to change its data collection practices following fines in other parts of the European Union, including Italy and Greece.

While Wolfsen acknowledged that Clearview AI could be used to enhance police investigations, he said that the technology would be more appropriate if it was being managed by law enforcement “in highly exceptional cases only” and not indiscriminately by a private company.

“The company should never have built the database and is insufficiently transparent,” the Dutch DPA said.

Although Clearview AI appears ready to defend against the fine, the Dutch DPA said that the company failed to object to the decision within the provided six-week timeframe and therefore cannot appeal the decision.

Further, the Dutch DPA confirmed that authorities are “looking for ways to make sure that Clearview stops the violations” beyond the fines, including by “investigating if the directors of the company can be held personally responsible for the violations.”

Wolfsen claimed that such “liability already exists if directors know that the GDPR is being violated, have the authority to stop that, but omit to do so, and in this way consciously accept those violations.”



Meta defends charging fee for privacy amid showdown with EU


Meta continues to hit walls with its heavily scrutinized plan to comply with the European Union’s strict online competition law, the Digital Markets Act (DMA), by offering Facebook and Instagram subscriptions as an alternative for privacy-inclined users who want to opt out of ad targeting.

Today, the European Commission (EC) announced preliminary findings that Meta’s so-called “pay or consent” or “pay or OK” model—which gives users a choice to either pay for access to its platforms or give consent to collect user data to target ads—is not compliant with the DMA.

According to the EC, Meta’s advertising model violates the DMA in two ways. First, it “does not allow users to opt for a service that uses less of their personal data but is otherwise equivalent to the ‘personalized ads-based service.’” And second, it “does not allow users to exercise their right to freely consent to the combination of their personal data,” the press release said.

Now, Meta will have a chance to review the EC’s evidence and defend its policy, with today’s findings kicking off a process that will take months. The EC’s investigation is expected to conclude next March. Thierry Breton, the commissioner for the internal market, said in the press release that the preliminary findings represent “another important step” to ensure Meta’s full compliance with the DMA.

“The DMA is there to give back to the users the power to decide how their data is used and ensure innovative companies can compete on equal footing with tech giants on data access,” Breton said.

A Meta spokesperson told Ars that Meta plans to fight the findings—which could trigger fines up to 10 percent of the company’s worldwide turnover, as well as fines up to 20 percent for repeat infringement if Meta loses.

Meta continues to claim that its “subscription for no ads” model was “endorsed” by the highest court in Europe, the Court of Justice of the European Union (CJEU), last year.

“Subscription for no ads follows the direction of the highest court in Europe and complies with the DMA,” Meta’s spokesperson said. “We look forward to further constructive dialogue with the European Commission to bring this investigation to a close.”

However, some critics have noted that the supposed endorsement was not an official part of the ruling and that particular case was not regarding DMA compliance.

The EC agreed that more talks were needed, writing in the press release, “the Commission continues its constructive engagement with Meta to identify a satisfactory path towards effective compliance.”



Meta halts plans to train AI on Facebook, Instagram posts in EU

Not so fast —

Meta was going to start training AI on Facebook and Instagram posts on June 26.


Meta has apparently paused plans to process mounds of user data to bring new AI experiences to Europe.

The decision comes after data regulators rebuffed the tech giant’s claims that it had “legitimate interests” in processing European Union- and European Economic Area (EEA)-based Facebook and Instagram users’ data—including personal posts and pictures—to train future AI tools.

There’s not much information available yet on Meta’s decision. But Meta’s EU regulator, the Irish Data Protection Commission (DPC), posted a statement confirming that Meta made the move after ongoing discussions with the DPC about compliance with the EU’s strict data privacy laws, including the General Data Protection Regulation (GDPR).

“The DPC welcomes the decision by Meta to pause its plans to train its large language model using public content shared by adults on Facebook and Instagram across the EU/EEA,” the DPC said. “This decision followed intensive engagement between the DPC and Meta. The DPC, in co-operation with its fellow EU data protection authorities, will continue to engage with Meta on this issue.”

The European Center for Digital Rights, known as Noyb, had filed 11 complaints across the EU and intended to file more to stop Meta from moving forward with its AI plans. The DPC initially gave Meta AI the green light to proceed but has now made a U-turn, Noyb said.

Meta’s policy still requires update

In a blog, Meta had previously teased new AI features coming to the EU, including everything from customized stickers for chats and stories to Meta AI, a “virtual assistant you can access to answer questions, generate images, and more.” Meta had argued that training on EU users’ personal data was necessary so that AI services could reflect “the diverse cultures and languages of the European communities who will use them.”

Before the pause, the company had been hoping to rely “on the legal basis of ‘legitimate interests’” to process the data, because it’s needed “to improve AI at Meta.” But Noyb and EU data regulators had argued that Meta’s legal basis did not comply with the GDPR, with the Norwegian Data Protection Authority arguing that “the most natural thing would have been to ask the users for their consent before their posts and images are used in this way.”

Rather than ask for consent, however, Meta had given EU users until June 26 to opt out. Noyb had alleged that in going this route, Meta planned to use “dark patterns” to thwart AI opt-outs in the EU and collect as much data as possible to fuel undisclosed AI technologies. Noyb urgently argued that once users’ data is in the system, “users seem to have no option of ever having it removed.”

Noyb said that the “obvious explanation” for Meta seemingly halting its plans was pushback from EU officials, but the privacy advocacy group also warned EU users that Meta’s privacy policy has not yet been fully updated to reflect the pause.

“We welcome this development but will monitor this closely,” Max Schrems, Noyb chair, said in a statement provided to Ars. “So far there is no official change of the Meta privacy policy, which would make this commitment legally binding. The cases we filed are ongoing and will need a determination.”

Ars was not immediately able to reach Meta for comment.



Facebook, Instagram may cut fees by nearly 50% in scramble for DMA compliance


Meta is considering cutting monthly subscription fees for Facebook and Instagram users in the European Union nearly in half to comply with the Digital Markets Act (DMA), Reuters reported.

During a day-long public workshop on Meta’s DMA compliance, Meta’s competition and regulatory director, Tim Lamb, told the European Commission (EC) that individual subscriber fees could be slashed from 9.99 euros to 5.99 euros. Meta is hoping that reducing fees will help to speed up the EC’s process for resolving Meta’s compliance issues. If Meta’s offer is accepted, any additional accounts would then cost 4 euros instead of 6 euros.

Lamb said that these prices are “by far the lowest end of the range that any reasonable person should be paying for services of these quality,” calling it a “serious offer.”

The DMA requires that Meta’s users of Facebook, Instagram, Facebook Messenger, and Facebook Marketplace “freely” give consent to share data used for ad targeting without losing access to the platform if they’d prefer not to share data. That means services must provide an acceptable alternative for users who don’t consent to data sharing.

“Gatekeepers should enable end users to freely choose to opt-in to such data processing and sign-in practices by offering a less personalized but equivalent alternative, and without making the use of the core platform service or certain functionalities thereof conditional upon the end user’s consent,” the DMA says.

Designated gatekeepers like Meta have debated what it means for a user to “freely” give consent, suggesting that offering a paid subscription for users who decline to share data would be one route for Meta to continue offering high-quality services without routinely hoovering up data on all its users.

But EU privacy advocates like NOYB have protested Meta’s plan to offer a subscription model as the alternative to consenting to data sharing, calling it a “pay or OK” model that forces Meta users who cannot pay the fee to consent to invasive data sharing they would otherwise decline. In a statement shared with Ars, NOYB chair Max Schrems said that even if Meta reduced its fees to 1.99 euros, it would be forcing consent from 99.9 percent of users.

“We know from all research that even a fee of just 1.99 euros or less leads to a shift in consent from 3–10 percent that genuinely want advertisement to 99.9 percent that still click yes,” Schrems said.

In the EU, the General Data Protection Regulation (GDPR) “requires that consent must be ‘freely’ given,” Schrems said. “In reality, it is not about the amount of money—it is about the ‘pay or OK’ approach as a whole. The entire purpose of ‘pay or OK’, is to get users to click on OK, even if this is not their free and genuine choice. We do not think the mere change of the amount makes this approach legal.”

Where EU stands on subscription models

Meta expects that a subscription model is a legal alternative under the DMA. The tech giant said it was launching EU subscriptions last November after the Court of Justice of the European Union (CJEU) “endorsed the subscriptions model as a way for people to consent to data processing for personalized advertising.”

It’s unclear how popular the subscriptions have been at the current higher cost. Right now in the EU, monthly Facebook and Instagram subscriptions cost 9.99 euros per month on the web or 12.99 euros per month on iOS and Android, with additional fees of 6 euros per month on the web and 8 euros per month on iOS and Android for each additional account. Meta declined to comment on how many EU users have subscribed, noting to Ars that it has no obligation to do so.

In the CJEU case, the court was reviewing Meta’s GDPR compliance, which Schrems noted is less strict than the DMA. The CJEU specifically said that under the GDPR, “users must be free to refuse individually”—“in the context of” signing up for services—“to give their consent to particular data processing operations not necessary” for Meta to provide such services “without being obliged to refrain entirely from using the service.”



Vending machine error reveals secret face image database of college students

“Stupid M&M machines” —

Facial-recognition data is typically used to prompt more vending machine sales.



Canada-based University of Waterloo is racing to remove M&M-branded smart vending machines from campus after outraged students discovered the machines were covertly collecting facial-recognition data without their consent.

The scandal started when a student using the alias SquidKid47 posted an image on Reddit showing a campus vending machine error message, “Invenda.Vending.FacialRecognitionApp.exe,” displayed after the machine failed to launch a facial recognition application that nobody expected to be part of the process of using a vending machine.

Reddit post shows error message displayed on a University of Waterloo vending machine (cropped and lightly edited for clarity).


“Hey, so why do the stupid M&M machines have facial recognition?” SquidKid47 pondered.

The Reddit post sparked an investigation from a fourth-year student named River Stanley, who was writing for a university publication called MathNEWS.

Stanley sounded the alarm after consulting Invenda sales brochures promising that “the machines are capable of sending estimated ages and genders” of every person who used them, without ever requesting consent.

This frustrated Stanley, who discovered that Canada’s privacy commissioner had years ago investigated a shopping mall operator called Cadillac Fairview after discovering some of the malls’ informational kiosks were secretly “using facial recognition software on unsuspecting patrons.”

Only because of that official investigation did Canadians learn that “over 5 million nonconsenting Canadians” were scanned into Cadillac Fairview’s database, Stanley reported. While Cadillac Fairview was ultimately forced to delete the entire database, Stanley wrote that the consequences for Invenda clients like Mars that collect similarly sensitive facial recognition data without consent remain unclear.

Stanley’s report ended with a call for students to demand that the university “bar facial recognition vending machines from campus.”

A University of Waterloo spokesperson, Rebecca Elming, eventually responded, confirming to CTV News that the school had asked for the vending machine software to be disabled until the machines could be removed.

Students told CTV News that their confidence in the university’s administration was shaken by the controversy. Some students claimed on Reddit that they attempted to cover the vending machine cameras while waiting for the school to respond, using gum or Post-it notes. One student pondered whether “there are other places this technology could be being used” on campus.

Elming was not able to confirm the exact timeline for removing the machines, telling Ars only that it would happen “as soon as possible.” She told Ars she is “not aware of any similar technology in use on campus.” And for any casual snackers wondering whether the vending machines will be replaced with snack dispensers not equipped with surveillance cameras, Elming confirmed that “the plan is to replace them.”

Invenda claims machines are GDPR-compliant

MathNEWS’ investigation tracked down responses from companies responsible for smart vending machines on the University of Waterloo’s campus.

Adaria Vending Services told MathNEWS that “what’s most important to understand is that the machines do not take or store any photos or images, and an individual person cannot be identified using the technology in the machines. The technology acts as a motion sensor that detects faces, so the machine knows when to activate the purchasing interface—never taking or storing images of customers.”

According to Adaria and Invenda, students shouldn’t worry about data privacy because the vending machines are “fully compliant” with the world’s toughest data privacy law, the European Union’s General Data Protection Regulation (GDPR).

“These machines are fully GDPR compliant and are in use in many facilities across North America,” Adaria’s statement said. “At the University of Waterloo, Adaria manages last mile fulfillment services—we handle restocking and logistics for the snack vending machines. Adaria does not collect any data about its users and does not have any access to identify users of these M&M vending machines.”

Under the GDPR, face image data is considered among the most sensitive data that can be collected, typically requiring explicit consent to collect, so it’s unclear how the machines may meet that high bar based on the Canadian students’ experiences.

According to a press release from Invenda, Mars, the maker of M&M candies, was a key part of Invenda’s expansion into North America. Only after closing a $7 million funding round, including deals with Mars and other major clients like Coca-Cola, could Invenda push for the expansive global growth that seemingly vastly expands its smart vending machines’ data collection and surveillance opportunities.

“The funding round indicates confidence among Invenda’s core investors in both Invenda’s corporate culture, with its commitment to transparency, and the drive to expand global growth,” Invenda’s press release said.

But University of Waterloo students like Stanley now question Invenda’s “commitment to transparency” in North American markets, especially since the company is seemingly openly violating Canadian privacy law, Stanley told CTV News.

On Reddit, while some students joked that SquidKid47’s face “crashed” the machine, others asked if “any pre-law students wanna start up a class-action lawsuit?” One commenter summed up students’ frustration by typing in all caps, “I HATE THESE MACHINES! I HATE THESE MACHINES! I HATE THESE MACHINES!”



Meta relents to EU, allows unlinking of Facebook and Instagram accounts


Meta will allow some Facebook and Instagram users to unlink their accounts as part of the platform’s efforts to comply with the European Union’s Digital Markets Act (DMA) ahead of enforcement starting March 1.

In a blog, Meta’s competition and regulatory director, Tim Lamb, wrote that Instagram and Facebook users in the EU, the European Economic Area, and Switzerland would be notified in the “next few weeks” about “more choices about how they can use” Meta’s services and features, including new opportunities to limit data-sharing across apps and services.

Most significantly, users can choose to either keep their accounts linked or “manage their Instagram and Facebook accounts separately so that their information is no longer used across accounts.” Up to this point, linking user accounts had provided Meta with more data to more effectively target ads to more users. The perk of accessing data on Instagram’s widening younger user base, TechCrunch noted, was arguably the $1 billion selling point explaining why Facebook acquired Instagram in 2012.

Also announced today, users protected by the DMA will soon be able to separate their Facebook Messenger, Marketplace, and Gaming accounts. However, doing so will limit some social features available in some of the standalone apps.

While Messenger users who disconnect the chat service from their Facebook accounts will still “be able to use Messenger’s core service offering such as private messaging and chat, voice and video calling,” Marketplace users making that same choice will have to email sellers and buyers rather than using Messenger. And unlinked Gaming app users will only be able to play single-player games, losing access to the social gaming otherwise supported by linking the Gaming service to their Facebook social networks.

While Meta may have had options other than depriving users who unlink their accounts of some features, Meta didn’t really have a choice about offering the newly announced unlinking options themselves. The DMA specifically requires that very large platforms designated as “gatekeepers” give users the “specific choice” of opting out of sharing personal data across a platform’s different core services or across any separate services that the gatekeepers manage.

Without gaining “specific” consent, gatekeepers will no longer be allowed to “combine personal data from the relevant core platform service with personal data from any further core platform services” or “cross-use personal data from the relevant core platform service in other services provided separately by the gatekeeper,” the DMA says. The “specific” requirement is designed to block platforms from securing consent at sign-up, then hoovering up as much personal data as possible as new services are added in an endless pursuit of advertising growth.

As defined under the General Data Protection Regulation, the “specific” consent requirement stops platforms from obtaining user consent for broadly defined data processing; instead, it establishes “the need for granularity,” so that platforms must always seek consent for each “specific” data “processing purpose.”

“This is an important ‘safeguard against the gradual widening or blurring of purposes for which data is processed, after a data subject has agreed to the initial collection of the data,’” the European Data Protection Supervisor explained in public comments describing “commercial surveillance and data security practices that harm consumers” provided at the request of the FTC in 2022.

According to Meta’s help page, once users opt out of sharing data between apps and services, Meta will “stop combining your info across these accounts” within 15 days “after you’ve removed them.” However, all “previously combined info would remain combined.”
