

Fury over Discord’s age checks explodes after shady Persona test in UK


Persona confirmed all age-check data from Discord’s UK test was deleted.

Shortly after Discord announced that all users will soon be defaulted to teen experiences until their ages are verified, the messaging platform faced immediate backlash.

One of the major complaints was that Discord planned to collect more government IDs as part of its global age verification process. It shocked many that Discord would be so bold so soon after a third-party breach of a former age-check partner’s services exposed 70,000 Discord users’ government IDs.

Attempting to reassure users, Discord claimed that most users wouldn’t have to show ID, relying instead on AI analysis of video selfies to estimate ages, which raised separate privacy concerns. In the future, perhaps behavioral signals would override the need for age checks for most users, Discord suggested, seemingly downplaying the risk that sensitive data would be improperly stored.

Discord didn’t hide that it planned to continue requesting IDs for any user appealing an incorrect age assessment, and users weren’t happy, since that is exactly how the prior breach happened. Responding to critics, Discord claimed that the majority of ID data was promptly deleted. Specifically, Savannah Badalich, Discord’s global head of product policy, told The Verge that IDs shared during appeals “are deleted quickly—in most cases, immediately after age confirmation.”

It’s unsurprising then that backlash exploded after Discord posted, and then weirdly deleted, a disclaimer on an FAQ about Discord’s age assurance policies that contradicted the short storage timeline Discord had touted for IDs. An archived version of the page shows the note shared this warning:

“Important: If you’re located in the UK, you may be part of an experiment where your information will be processed by an age-assurance vendor, Persona. The information you submit will be temporarily stored for up to 7 days, then deleted. For ID document verification, all details are blurred except your photo and date of birth, so only what’s truly needed for age verification is used.”

Critics felt that Discord was obscuring not just how long IDs may be stored, but also the entities collecting information. Discord did not provide details on what the experiment was testing or how many users were affected, and Persona was not listed as a partner on its platform.

Asked for comment, Discord told Ars that only a small number of users were included in the experiment, which ran for less than one month. That test has since concluded, Discord confirmed, and Persona is no longer an active vendor partnering with Discord. Moving forward, Discord promised to “keep our users informed as vendors are added or updated.”

While Discord seeks to distance itself from Persona, Rick Song, Persona’s CEO, has been stuck responding to the mounting backlash. Hoping to quell fears that any of the UK data collected during the experiment risked being breached, he told Ars that all the data of verified individuals involved in Discord’s test was deleted immediately upon verification.

Persona draws fire amid Discord fury

This all seemingly started after Discord was forced to find age verification solutions when Australia’s under-16 social media ban and the United Kingdom’s Online Safety Act came into effect.

It seems that in the UK, Discord struggled to find partners, as the messaging service wasn’t just trying to stop minors from accessing adult content but also needed to block adults from messaging minors.

Setting aside known issues with accuracy in today’s age estimation technology, there’s an often-overlooked nuance to how age solutions work, particularly when the safety of children is involved in platforms’ decisions. Age checks good enough to block kids from accessing adult content may not be robust enough to stop tech-savvy adults with malicious intentions bent on contacting minors; the UK’s OSA required that Discord’s age checks do both.

It seems likely that Discord expected Persona to be a partner that the UK’s OSA enforcers would approve. Regulators had previously accepted Persona as an age verification service on Reddit, which shares similarly complex age verification goals with Discord.

For Persona, the partnership came at a time when many Discord users globally were closely monitoring the service, trying to decide whether they trusted Discord with their age check data.

After Discord shocked users by abruptly retracting the disclaimer about the Persona experiment, mistrust swelled, and scrutiny of Persona intensified.

On X and other social media platforms, critics warned that Palantir co-founder Peter Thiel’s Founders Fund was a major investor in Persona. They worried Thiel might have influence over Persona or access to Persona’s data, or, worse, that Thiel’s ties to the Trump administration might mean the government had access to it. Conspiracy theories swirled that Discord data may one day be fed into government facial recognition systems, increasing heat on Persona and leaving Song with no choice but to cautiously confront allegations.

Hackers probe Persona

Perhaps most problematic for Persona, the mass outrage prompted cybersecurity researchers to investigate. They quickly exposed a “workaround” to avoid Persona’s age checks on Discord, The Rage, an independent publication that covers financial surveillance, reported. But more concerning for privacy advocates, researchers also found an uncompressed version of Persona’s frontend code “exposed to the open Internet on a US government authorized server.”

“In 2,456 publicly accessible files, the code revealed the extensive surveillance Persona software performs on its users, bundled in an interface that pairs facial recognition with financial reporting—and a parallel implementation that appears designed to serve federal agencies,” The Rage reported.

As The Rage reported, and Song confirmed to Ars, Persona does not currently have any government contracts. Instead, the exposed service “appears to be powered by an OpenAI chatbot,” The Rage noted.

OpenAI is highlighted as an active partner on Persona’s website, which claims Persona screens millions of users for OpenAI each month. According to The Rage, “the publicly exposed domain, titled ‘openai-watchlistdb.withpersona.com,’” appears to “query identity verification requests on an OpenAI database” that has a “FedRAMP-authorized parallel implementation of the software called ‘withpersona-gov.com.’”

Hackers warned “that OpenAI may have created an internal database for Persona identity checks that spans all OpenAI users via its internal watchlistdb,” seemingly exploiting the “opportunity to go from comparing users against a single federal watchlist, to creating the watchlist of all users themselves.”

In correspondence with one of the researchers, Song clarified that this product is based on publicly available records for sanctions and warnings, and the service does not store any user data sent to it.

OpenAI did not immediately respond to Ars’ request to comment.

Persona denies government, ICE ties

On Wednesday, Persona’s chief operating officer, Christie Kim, sought to reassure Persona customers as the Discord controversy grew. In an email, Kim said that Persona invests “heavily in infrastructure, compliance, and internal training to ensure sensitive data is handled responsibly,” and not exposed.

“Over the past week, multiple social media posts and online articles have circulated repeating misleading claims about Persona, insinuating conspiracies around our work with Discord and our investors,” Kim wrote.

Noting that Persona does not “typically engage with online speculation,” Kim said that the scandal required a direct response “because we operate in a sensitive space and your trust in us is foundational to our partnership.”

As expected, Kim noted that Persona is not partnered with federal agencies, including the Department of Homeland Security or Immigration and Customs Enforcement (ICE).

“Transparently, we are actively working on a couple of potential contracts which would be publicly visible if we move forward,” Kim wrote. “However, these engagements are strictly for workforce account security of government employees and do not include ICE or any agency within the Department of Homeland Security.”

Kim acknowledged that Thiel’s Founders Fund is an investor but said that investors do not have access to Persona data and that Thiel was not involved in Persona’s operations.

“He is not on our board, does not advise us, has no role in our operations or decision-making, and is not directly involved with Persona in any way,” Kim wrote. “Persona and Palantir share no board members and have no business relationship with each other.”

In the email, Kim confirmed that Persona was planning a press campaign to go on the defensive, speaking with media to clarify the narrative. She apologized for any inconvenience that the heightened scrutiny on the company’s services may have caused.

That scrutiny has likely spooked companies that may have previously gravitated to Persona as a partner that seems savvy about government approvals.

Persona combats ongoing trust issues

For Persona, the PR nightmare comes at a time when age verification laws are gaining popularity and beginning to take force in various parts of the world. Persona’s background in verifying identities for financial services to prevent fraud seems to make its services—which The Rage noted combine facial recognition with financial reporting—an appealing option for platforms seeking a solution that will appease regulators. In responses to LinkedIn threads, Song has denied that Persona links facial biometrics to financial records or law enforcement databases.

But because of Persona’s background in financial services and fraud protection, its data retention policies—which require some data be retained for legal and audit purposes—will likely do little to reassure anyone uncomfortable with a tech company gathering a massive database of government IDs. Such databases are viewed as hugely attractive targets for bad actors behind costly breaches, and Discord’s users have already been burned once.

On X, Song responded to one of the hackers—a user named Celeste with the handle @vmfunc—aiming to provide more transparency into how Persona was addressing the flagged issues. In the thread, he shared screenshots of emails documenting his correspondence with Celeste over security concerns.

The correspondence showed that Celeste credited Persona for quickly fixing the front-end issue but also noted that it was hard to trust Persona’s story about government and Palantir ties, since the company wouldn’t put more information on the record. Additionally, Persona’s compliance team should be concerned that the company had not yet started an “in-depth security review,” Celeste said.

“Unfortunately, there is no way I can fully trust you here and you know this,” Celeste wrote, “but I’m trying to act in good faith” by explicitly stating that “we found zero references” to ICE or other entities of concern to critics “in all source files we found.”

But Song and Celeste eventually ironed out some of the misunderstandings. On Friday, Celeste posted on X that “I see a lot of misinformation going online about our recent post about Persona.” Later correspondence shared with Ars showed Celeste thanked Song for his honesty in responding to questions, noting that the CEO putting statements on the record countering the rumors carried weight in a situation where Persona’s claims couldn’t all necessarily be independently verified.

This story has been updated to include additional insights from Persona.


Ashley is a senior policy reporter for Ars Technica, dedicated to tracking social impacts of emerging policies and new technologies. She is a Chicago-based journalist with 20 years of experience.



Discord faces backlash over age checks after data breach exposed 70,000 IDs


Discord to block adult content unless users verify ages with selfies or IDs.

Discord is facing backlash after announcing that all users will soon be required to verify ages to access adult content by sharing video selfies or uploading government IDs.

According to Discord, it’s relying on AI technology that verifies age on the user’s device, either by evaluating a user’s facial structure or by comparing a selfie to a government ID. Although government IDs will be checked off-device, the selfie data will never leave the user’s device, Discord emphasized. Both forms of data will be promptly deleted after the user’s age is estimated.

In a blog, Discord confirmed that “a phased global rollout” would begin in “early March,” at which point all users globally would be defaulted to “teen-appropriate” experiences.

To unblur sensitive media or access age-restricted channels, the majority of users will likely have to undergo Discord’s age estimation process. Most users will only need to verify their ages once, Discord said, but some users “may be asked to use multiple methods, if more information is needed to assign an age group,” the blog said.

On social media, alarmed Discord users protested the move, doubting whether Discord could be trusted with their most sensitive information after Discord age verification data was recently breached. In October, hackers stole government IDs of 70,000 Discord users from a third-party service that Discord previously trusted to verify ages in the United Kingdom and Australia.

At that time, Discord told users that the hackers were hoping to use the stolen data to “extort a financial ransom from Discord.” In October, Ars Senior Security Editor Dan Goodin joined others warning that “the best advice for people who have submitted IDs to Discord or any other service is to assume they have been or soon will be stolen by hackers and put up for sale or used in extortion scams.”

Users now fear that Discord will only become a bigger target for bad actors as more sensitive information is collected worldwide.

It’s no surprise then that hundreds of Discord users on Reddit slammed the decision to expand age verification globally shortly after The Verge broke the news. On a PC gaming subreddit discussing alternative apps for gamers, one user wrote, “Hell, Discord has already had one ID breach, why the fuck would anyone verify on it after that?”

“This is how Discord dies,” another user declared. “Seriously, uploading any kind of government ID to a 3rd party company is just asking for identity theft on a global scale.”

Many users seem just as sketched out about sharing face scans. On the Discord app subreddit, some users vowed to never submit selfies or IDs, fearing that breaches may be inevitable and suspecting Discord of downplaying privacy risks while allowing data harvesting.

Who can access Discord age-check data?

Discord’s system is supposed to make sure that only users have access to their age-check data, which Discord said would never leave their phones.

The company is hoping to convince users that it has tightened security after the breach by partnering with k-ID, an increasingly popular age-check service provider that’s also used by social platforms from Meta and Snap.

However, self-described Discord users on Reddit aren’t so sure, with some going the extra step of picking apart k-ID’s privacy policy to understand exactly how age is verified without data ever leaving the device.

“The wording is pretty unclear and inconsistent even if you dig down to the k-ID privacy policy,” one Redditor speculated. “Seems that ID scans are uploaded to k-ID servers, they delete them, but they also mention using ‘trusted 3rd parties’ for verification, who may or may not delete it.” That user seemingly gave up on finding reassurances in either company’s privacy policies, noting that “everywhere along the chain it reads like ‘we don’t collect your data, we forward it to someone else… .’”

Discord did not immediately respond to Ars’ requests to comment directly on how age checks work without data leaving the device.

To better understand user concerns, Ars reviewed the privacy policies, noting that k-ID said its “facial age estimation” tool is provided by a Swiss company called Privately.

“We don’t actually see any faces that are processed via this solution,” k-ID’s policy said.

That part does seem vague, since Privately isn’t explicitly included in the “we” in that statement. However, further down, the policy more clearly states that “neither k-ID nor its service providers collect any biometric information from users when they interact with the solution. k-ID only receives and stores the outcome of the age check process.” In that section, “service providers” seems to refer to partners like Discord, which integrate k-ID’s age checks, rather than third parties like Privately that actually conduct the age check.

Asked for comment, a k-ID spokesperson told Ars that “the Facial Age Estimation technology runs entirely on the user’s device in real time when they are performing the verification. That means there is no video or image transmitted, and the estimation happens locally. The only data to leave the device is a pass/fail of the age threshold which is what Discord receives (and some performance metrics that contain no personal data).”

K-ID’s spokesperson told Ars that no third parties store personal data shared during age checks.

“k-ID, does not receive personal data from Discord when performing age-assurance,” k-ID’s spokesperson said. “This is an intentional design choice grounded in data protection and data minimisation principles. There is no storage of personal data by k-ID or any third parties, regardless of the age assurance method used.”

Privately’s website offers a little more information on how on-device age estimation works, along with more reassurances that data won’t leave devices.

Privately’s services were designed to minimize data collection and prioritize anonymity to comply with the European Union’s General Data Protection Regulation, Privately noted. “No user biometric or personal data is captured or transmitted,” Privately’s website said, while bragging that “our secret sauce is our ability to run very performant models on the user device or user browser to implement a privacy-centric solution.”

The company’s privacy policy offers slightly more detail, noting that the company avoids relying on the cloud while running AI models on local devices.

“Our technology is built using on-device edge-AI that facilitates data minimization so as to maximise user privacy and data protection,” the privacy policy said. “The machine learning based technology that we use (for age estimation and safeguarding) processes user’s data on their own devices, thereby avoiding the need for us or for our partners to export user’s personal data onto any form of cloud services.”

Additionally, the policy said, “our technology solutions are built to operate mostly on user devices and to avoid sending any of the user’s personal data to any form of cloud service. For this we use specially adapted machine learning models that can be either deployed or downloaded on the user’s device. This avoids the need to transmit and retain user data outside the user device in order to provide the service.”

Finally, Privately explained that it also employs a “double blind” implementation to avoid knowing the origin of age estimation requests. That supposedly ensures that Privately only knows the result of age checks and cannot connect the result to a user on a specific platform.

Discord expects to lose users

Some Discord users may never be asked to verify their ages, even if they try to access age-restricted content. Savannah Badalich, Discord’s global head of product policy, told The Verge that Discord “is also rolling out an age inference model that analyzes metadata, like the types of games a user plays, their activity on Discord, and behavioral signals like signs of working hours or the amount of time they spend on Discord.”

“If we have a high confidence that they are an adult, they will not have to go through the other age verification flows,” Badalich said.

Badalich confirmed that Discord is bracing for some users to leave Discord over the update but suggested that “we’ll find other ways to bring users back.”

On Reddit, Discord users complained that age verification is easy to bypass, forcing adults to share sensitive information without keeping kids away from harmful content. In Australia, where Discord’s policy first rolled out, some kids claimed that Discord never even tried to estimate their ages, while others found it easy to trick k-ID by using AI videos or altering their appearances to look older. A teen girl relied on fake eyelashes to do the trick, while one 13-year-old boy was estimated to be over 30 years old after scrunching his face to seem more wrinkled.

Badalich told The Verge that Discord doesn’t expect the tools to work perfectly but acts quickly to block workarounds, like teens using Death Stranding’s photo mode to skirt age gates. However, questions remain about the accuracy of Discord’s age estimation model in assessing minors’ ages, in particular.

It may be noteworthy that Privately only claims that its technology is “proven to be accurate to within 1.3 years, for 18-20-year-old faces, regardless of a customer’s gender or ethnicity.” But experts told Ars last year that flawed age-verification technology still frequently struggles to distinguish minors from adults, especially when differentiating between a 17- and 18-year-old, for example.

Perhaps notably, Discord’s prior scandal occurred after hackers stole government IDs that users shared as part of the appeal process in order to fix an incorrect age estimation. Appeals could remain the most vulnerable part of this process, The Verge’s report indicated. Badalich confirmed that a third-party vendor would be reviewing appeals, with the only reassurance for users seemingly that IDs shared during appeals “are deleted quickly—in most cases, immediately after age confirmation.”

On Reddit, Discord fans awaiting big changes remain upset. A disgruntled Discord user suggested that “corporations like Facebook and Discord, will implement easily passable, cheapest possible, bare minimum under the law verification, to cover their ass from a lawsuit,” while forcing users to trust that their age-check data is secure.

Another user joked that she’d be more willing to trust that selfies never leave a user’s device if Discord were “willing to pay millions to every user” whose “scan does leave a device.”

This story was updated on February 9 to clarify that government IDs are checked off-device.




Australia’s social media ban is “problematic,” but platforms will comply anyway

Social media platforms have agreed to comply with Australia’s social media ban for users under 16 years old, begrudgingly embracing the world’s most restrictive online child safety law.

On Tuesday, Meta, Snap, and TikTok confirmed to Australia’s parliament that they’ll start removing and deactivating more than a million underage accounts when the law’s enforcement begins on December 10, Reuters reported.

Firms risk fines of up to $32.5 million for failing to block underage users.

Age checks are expected to be spotty, however, and Australia is still “scrambling” to figure out “key issues around enforcement,” including detailing firms’ precise obligations, AFP reported.

An FAQ managed by Australia’s eSafety regulator noted that platforms will be expected to find the accounts of all users under 16.

Those users must be allowed to download their data easily before their account is removed.

Some platforms can otherwise allow users to simply deactivate and retain their data until they reach age 17. Meta and TikTok expect to go that route, but Australia’s regulator warned that “users should not rely on platforms to provide this option.”

Additionally, platforms must prepare to catch kids who skirt age gates, the regulator said, and must block anyone under 16 from opening a new account. Beyond that, they’re expected to prevent “workarounds” to “bypass restrictions,” such as kids using AI to fake IDs, deepfakes to trick face scans, or the use of virtual private networks (VPNs) to alter their location to basically anywhere else in the world with less restrictive child safety policies.

Kids discovered inappropriately accessing social media should be easy to report, too, Australia’s regulator said.
