child safety


Pornhub prepares to block five more states rather than check IDs

“Uphill battle” —

The number of states blocked by Pornhub will soon nearly double.



Pornhub will soon be blocked in five more states as the adult site continues to fight what it considers privacy-infringing age-verification laws that require Internet users to provide an ID to access pornography.

On July 1, according to a blog post on the adult site announcing the impending block, Pornhub visitors in Indiana, Idaho, Kansas, Kentucky, and Nebraska will be “greeted by a video featuring” adult entertainer Cherie Deville, “who explains why we had to make the difficult decision to block them from accessing Pornhub.”

Pornhub explained that—similar to blocks in Texas, Utah, Arkansas, Virginia, Montana, North Carolina, and Mississippi—the site refuses to comply with soon-to-be-enforceable age-verification laws in this new batch of states that allegedly put users at “substantial risk” of identity theft, phishing, and other harms.

Age-verification laws requiring adult site visitors to submit “private information many times to adult sites all over the Internet” normalize the unnecessary disclosure of personally identifiable information (PII), Pornhub argued, warning, “this is not a privacy-by-design approach.”

Pornhub does not outright oppose age verification but advocates for laws that require device-based age verification, which allows users to access adult sites after authenticating their identity on their devices. That’s “the best and most effective solution for protecting minors and adults alike,” Pornhub argued, because the age-verification technology is proven and less PII would be shared.

“Users would only get verified once, through their operating system, not on each age-restricted site,” Pornhub’s blog said, claiming that “this dramatically reduces privacy risks and creates a very simple process for regulators to enforce.”

A spokesperson for Pornhub-owner Aylo told Ars that “unfortunately, the way many jurisdictions worldwide have chosen to implement age verification is ineffective, haphazard, and dangerous.”

“Any regulations that require hundreds of thousands of adult sites to collect significant amounts of highly sensitive personal information is putting user safety in jeopardy,” Aylo’s spokesperson told Ars. “Moreover, as experience has demonstrated, unless properly enforced, users will simply access non-compliant sites or find other methods of evading these laws.”

Age-verification laws are harmful, Pornhub says

Pornhub’s big complaint with current age-verification laws is that these laws are hard to enforce and seem to make it riskier than ever to visit an adult site.

“Since age verification software requires users to hand over extremely sensitive information, it opens the door for the risk of data breaches,” Pornhub’s blog said. “Whether or not your intentions are good, governments have historically struggled to secure this data. It also creates an opportunity for criminals to exploit and extort people through phishing attempts or fake [age verification] processes, an unfortunate and all too common practice.”

Over the past few years, the risk of identity theft or stolen PII on both widely used and smaller niche adult sites has been well-documented.

Hundreds of millions of people were impacted by major leaks exposing PII shared with popular adult sites like Adult Friend Finder and Brazzers in 2016, while likely tens of thousands of users were targeted on eight poorly secured adult sites in 2018. Niche and free sites have also been vulnerable to attacks, including millions collectively exposed through breaches of fetish porn site Luscious in 2019 and MyFreeCams in 2021.

And those are just the big breaches that make headlines. In 2019, Kaspersky Lab reported that malware targeting online porn account credentials more than doubled in 2018, and researchers analyzing 22,484 pornography websites estimated that 93 percent were leaking user data to a third party.

That’s why Pornhub argues that, as states have passed age-verification laws requiring ID, they’ve “introduced harm” by redirecting visitors to adult sites that have fewer privacy protections and worse security, allegedly exposing users to more threats.

As an example, Pornhub reported that traffic to its site in Louisiana “dropped by approximately 80 percent” after the state’s age-verification law passed. That allegedly showed not just how few users were willing to show an ID to access the popular platform, but also how “very easily” users could simply move to “pirate, illegal, or other non-compliant sites that don’t ask visitors to verify their age.”

Pornhub has continued to argue that states passing laws like Louisiana’s cannot effectively enforce the laws and are simply shifting users to make riskier choices when accessing porn.

“The Louisiana law and other copycat state-level laws have no regulator, only civil liability, which results in a flawed enforcement regime, effectively making it an option for platform operators to comply,” Pornhub’s blog said. As one of the world’s most popular adult platforms, Pornhub would surely be targeted for enforcement if found to be non-compliant, while smaller adult sites perhaps plagued by security risks and disincentivized to check IDs would go unregulated, the thinking goes.

Aylo’s spokesperson shared 2023 Similarweb data with Ars showing that sites complying with age-verification laws in Virginia, including Pornhub and xHamster, lost substantial traffic while seven non-compliant sites saw a sharp uptick in traffic. Similar trends were observed in Google Trends data for Utah and Mississippi, while market shares were seemingly largely maintained in California, a state not yet checking IDs to access adult sites.



US woman arrested, accused of targeting young boys in $1.7M sextortion scheme

Preventing leaks —

FBI has warned of significant spike in teen sextortion in 2024.


A 28-year-old Delaware woman, Hadja Kone, was arrested after cops linked her to an international sextortion scheme targeting thousands of victims—mostly young men and including some minors, the US Department of Justice announced Friday.

Citing a recently unsealed indictment, the DOJ alleged that Kone and co-conspirators “operated an international, financially motivated sextortion and money laundering scheme in which the conspirators engaged in cyberstalking, interstate threats, money laundering, and wire fraud.”

Through the scheme, conspirators allegedly sought to extort about $6 million from “thousands of potential victims,” the DOJ said, and ultimately successfully extorted approximately $1.7 million.

Young men from the United States, Canada, and the United Kingdom fell for the scheme, the DOJ said. They were allegedly targeted by scammers posing as “young, attractive females online,” who initiated conversations by offering to send sexual photographs or video recordings, then invited victims to “web cam” or “live video chat” sessions.

“Unbeknownst to the victims, during the web cam/live video chats,” the DOJ said, the scammers would “surreptitiously” record the victims “as they exposed their genitals and/or engaged in sexual activity.” The scammers then threatened to publish the footage online or else share the footage with “the victims’ friends, family members, significant others, employers, and co-workers,” unless payments were sent, usually via Cash App or Apple Pay.

Much of these funds were allegedly transferred overseas to Kone’s accused co-conspirators, including 22-year-old Siaka Ouattara of the West African nation of Ivory Coast. Ouattara was arrested by Ivorian authorities in February, the DOJ said.

“If convicted, Kone and Ouattara each face a maximum penalty of 20 years in prison for each conspiracy count and money laundering count, and a maximum penalty of 20 years in prison for each wire fraud count,” the DOJ said.

The FBI has said that it has been cracking down on sextortion after “a huge increase in the number of cases involving children and teens being threatened and coerced into sending explicit images online.” In 2024, the FBI announced a string of arrests, but none of the schemes so far have been as vast or far-reaching as the scheme that Kone allegedly helped operate.

In January, the FBI issued a warning about the “growing threat” to minors, warning parents that victims are “typically males between the ages of 14 to 17, but any child can become a victim.” Young victims are at risk of self-harm or suicide, the FBI said.

“From October 2021 to March 2023, the FBI and Homeland Security Investigations received over 13,000 reports of online financial sextortion of minors,” the FBI’s announcement said. “The sextortion involved at least 12,600 victims—primarily boys—and led to at least 20 suicides.”

For years, reports have shown that payment apps have been used in sextortion schemes with seemingly little intervention. When it comes to protecting minors, sextortion protections seem sparse, as neither Apple Pay nor Cash App appears to have any specific policies to combat the issue. Both apps do, however, allow minors over 13 to create accounts only with an authorized adult supervising.

Apple and Cash App did not immediately respond to Ars’ request for comment.

Instagram, Snapchat add sextortion protections

Some social media platforms are responding to the spike in sextortion targeting minors.

Last year, Snapchat released a report finding that nearly two-thirds of more than 6,000 teens and young adults in six countries said that “they or their friends have been targeted in online ‘sextortion’ schemes” across many popular social media platforms. As a result of that report and prior research, Snapchat began allowing users to report sextortion specifically.

“Under the reporting menu for ‘Nudity or sexual content,’ a Snapchatter’s first option is to click, ‘They leaked/are threatening to leak my nudes,'” the report said.

Additionally, the DOJ’s announcement of Kone’s arrest came one day after Instagram confirmed that it was “testing new features to help protect young people from sextortion and intimate image abuse, and to make it more difficult for potential scammers and criminals to find and interact with teens.”

One feature will by default blur out sexual images shared over direct message, which Instagram said would protect minors from “scammers who may send nude images to trick people into sending their own images in return.” Instagram will also provide safety tips to anyone receiving a sexual image over DM, “encouraging them to report any threats to share their private images and reminding them that they can say no to anything that makes them feel uncomfortable.”

Perhaps more impactful, Instagram claimed that it was “developing technology to help identify where accounts may potentially be engaging in sextortion scams, based on a range of signals that could indicate sextortion behavior.” Having better signals helps Instagram to make it “harder for potential sextortion accounts to message or interact with people,” the platform said, by hiding those requests. Instagram also by default blocks adults from messaging users under 16 in some countries and under 18 in others.

Instagram said that other tech companies have also started “sharing more signals about sextortion accounts” through Lantern, a program that Meta helped to found with the Tech Coalition to prevent child sexual exploitation. Snapchat also participates in the cross-platform research.

According to the special agent in charge of the FBI’s Norfolk field office, Brian Dugan, “one of the best lines of defense to stopping a crime like this is to educate our most vulnerable on common warning signs, as well as empowering them to come forward if they are ever victimized.”

Both Instagram and Snapchat said they were also increasing sextortion resources available to educate young users.

“We know that sextortion is a risk teens and adults face across a range of platforms, and have developed tools and resources to help combat it,” Snap’s spokesperson told Ars. “We have extra safeguards for teens to protect against unwanted contact, and don’t offer public friend lists, which we know can be used to extort people. We also want to help young people learn the signs of this type of crime, and recently launched in-app resources to raise awareness of how to spot and report it.”



EU accuses TikTok of failing to stop kids pretending to be adults

Getting TikTok’s priorities straight —

TikTok becomes the second platform suspected of Digital Services Act breaches.


The European Commission (EC) is concerned that TikTok isn’t doing enough to protect kids, alleging that the short-video app may be sending kids down rabbit holes of harmful content while making it easy for kids to pretend to be adults and avoid the protective content filters that do exist.

The allegations came Monday when the EC announced a formal investigation into how TikTok may be breaching the Digital Services Act (DSA) “in areas linked to the protection of minors, advertising transparency, data access for researchers, as well as the risk management of addictive design and harmful content.”

“We must spare no effort to protect our children,” Thierry Breton, European Commissioner for Internal Market, said in the press release, reiterating that the “protection of minors is a top enforcement priority for the DSA.”

This makes TikTok the second platform investigated for possible DSA breaches, after X (aka Twitter) came under fire last December. Both are being scrutinized after submitting transparency reports in September that the EC said failed to satisfy the DSA’s strict standards, with predictable shortfalls such as insufficient advertising transparency and inadequate data access for researchers.

But while X is additionally being investigated over alleged dark patterns and disinformation—following accusations last October that X wasn’t stopping the spread of Israel/Hamas disinformation—it’s TikTok’s young user base that appears to be the focus of the EC’s probe into its platform.

“As a platform that reaches millions of children and teenagers, TikTok must fully comply with the DSA and has a particular role to play in the protection of minors online,” Breton said. “We are launching this formal infringement proceeding today to ensure that proportionate action is taken to protect the physical and emotional well-being of young Europeans.”

Likely over the coming months, the EC will request more information from TikTok, picking apart its DSA transparency report. The probe could require interviews with TikTok staff or inspections of TikTok’s offices.

Upon concluding its investigation, the EC could require TikTok to take interim measures to fix any issues that are flagged. The Commission could also make a decision regarding non-compliance, potentially subjecting TikTok to fines of up to 6 percent of its global turnover.

An EC press officer, Thomas Regnier, told Ars that the Commission suspected that TikTok “has not diligently conducted” risk assessments to properly maintain mitigation efforts protecting “the physical and mental well-being of their users, and the rights of the child.”

In particular, its algorithm may risk “stimulating addictive behavior,” and its recommender systems “might drag its users, in particular minors and vulnerable users, into a so-called ‘rabbit hole’ of repetitive harmful content,” Regnier told Ars. Further, TikTok’s age verification system may be subpar, with the EU alleging that TikTok perhaps “failed to diligently assess the risk of 13-17-year-olds pretending to be adults when accessing TikTok,” Regnier said.

To better protect TikTok’s young users, the EU’s investigation could force TikTok to update its age-verification system and overhaul its default privacy, safety, and security settings for minors.

“In particular, the Commission suspects that the default settings of TikTok’s recommender systems do not ensure a high level of privacy, security, and safety of minors,” Regnier said. “The Commission also suspects that the default privacy settings that TikTok has for 16-17-year-olds are not the highest by default, which would not be compliant with the DSA, and that push notifications are, by default, not switched off for minors, which could negatively impact children’s safety.”

TikTok could avoid steep fines by committing to remedies recommended by the EC at the conclusion of its investigation.

Regnier told Ars that the EC does not comment on ongoing investigations, but its probe into X has spanned three months so far. Because the DSA does not provide any deadlines that may speed up these kinds of enforcement proceedings, ultimately, the duration of both investigations will depend on how much “the company concerned cooperates,” the EU’s press release said.

A TikTok spokesperson told Ars that TikTok “would continue to work with experts and the industry to keep young people on its platform safe,” confirming that the company “looked forward to explaining this work in detail to the European Commission.”

“TikTok has pioneered features and settings to protect teens and keep under-13s off the platform, issues the whole industry is grappling with,” TikTok’s spokesperson said.

All online platforms are now required to comply with the DSA, but enforcement on TikTok began near the end of July 2023. A TikTok press release last August promised that the platform would be “embracing” the DSA. But in its transparency report, submitted the next month, TikTok acknowledged that the report only covered “one month of metrics” and may not satisfy DSA standards.

“We still have more work to do,” TikTok’s report said, promising that “we are working hard to address these points ahead of our next DSA transparency report.”



TikTok requires users to “forever waive” rights to sue over past harms

Or forever hold your peace —

TikTok may be seeking to avoid increasingly high costs of mass arbitration.


Some TikTok users may have skipped reviewing an update to TikTok’s terms of service this summer that shakes up the process for filing a legal dispute against the app. According to The New York Times, changes that TikTok “quietly” made to its terms suggest that the popular app has spent the back half of 2023 preparing for a wave of legal battles.

In July, TikTok overhauled its rules for dispute resolution, pivoting from requiring private arbitration to insisting that legal complaints be filed in either the US District Court for the Central District of California or the Superior Court of the State of California, County of Los Angeles. Legal experts told the Times this could be a way for TikTok to dodge arbitration claims filed en masse that can cost companies millions more in fees than they expected to pay through individual arbitration.

Perhaps most significantly, TikTok also added a section to its terms that mandates that all legal complaints be filed within one year of any alleged harm caused by using the app. The terms now say that TikTok users “forever waive” rights to pursue any older claims. And unlike a prior version of TikTok’s terms of service archived in May 2023, users do not seem to have any options to opt out of waiving their rights.

TikTok did not immediately respond to Ars’ request for comment, but it has previously defended its “industry-leading safeguards for young people,” the Times noted.

Lawyers told the Times that these changes could make it more challenging for TikTok users to pursue legal action at a time when federal agencies are heavily scrutinizing the app and complaints about certain TikTok features allegedly harming kids are mounting.

In the past few years, TikTok has had mixed success defending against user lawsuits filed in courts. In 2021, TikTok was dealt a $92 million blow after settling a class-action lawsuit filed in an Illinois court, which alleged that the app illegally collected underage TikTok users’ personal data. Then, in 2022, TikTok defeated a Pennsylvania lawsuit alleging that the app was liable for a child’s death because its algorithm promoted a deadly “Blackout Challenge.” The same year, a bipartisan coalition of 44 state attorneys general announced an investigation to determine whether TikTok violated consumer laws by allegedly putting young users at risk.

Section 230 shielded TikTok from liability in the 2022 “Blackout Challenge” lawsuit, but more recently, a California judge ruled last month that social media platforms—including TikTok, Facebook, Instagram, and YouTube—couldn’t use a blanket Section 230 defense in a child safety case involving hundreds of children and teens allegedly harmed by social media use across 30 states.

Some of the product liability claims raised in that case are tied to features not protected by Section 230 immunity, the judge wrote, opening up social media platforms to potentially more lawsuits focused on those features. And the Times reported that investigations like the one launched by the bipartisan coalition “can lead to government and consumer lawsuits.”

As new information becomes available to consumers through investigations and lawsuits, there are concerns that users may become aware of harms that occurred before TikTok’s one-year window to file complaints and have no path to seek remedies.

However, it’s currently unclear if TikTok’s new terms will stand up against legal challenges. University of Chicago law professor Omri Ben-Shahar told the Times that TikTok might struggle to defend its new terms in court, and it looks like TikTok is already facing pushback. One lawyer representing more than 1,000 guardians and minors claiming TikTok-related harms, Kyle Roche, told the Times that he is challenging TikTok’s updated terms. Roche said that the minors he represents “could not agree to the changes” and intended to ignore the updates, instead bringing their claims through private arbitration.

TikTok has also spent the past year defending against attempts by lawmakers to ban the China-based app in the US over concerns that the Chinese Communist Party (CCP) may use the app to surveil Americans. Congress has weighed different bipartisan bills with names like the “ANTI-SOCIAL CCP Act” and the “RESTRICT Act,” each intended to lay out a legal path to ban TikTok nationwide over alleged national security concerns.

So far, TikTok has defeated every attempt to widely ban the app, but that doesn’t mean lawmakers have any plans to stop trying. Most recently, a federal judge stopped Montana’s effort to ban TikTok statewide from taking effect, but a more limited TikTok ban restricting access on state-owned devices was upheld in Texas, Reuters reported.
