An AI-generated nude photo scandal has shut down a Pennsylvania private school. On Monday, classes were canceled after parents demanded that school leaders resign or face a lawsuit accusing the school of skipping mandatory reporting of the harmful images and potentially seeking criminal penalties.
The outcry erupted after a single student created sexually explicit AI images of nearly 50 female classmates at Lancaster Country Day School, Lancaster Online reported.
Head of School Matt Micciche seemingly first learned of the problem in November 2023, when a student anonymously reported the explicit deepfakes through “Safe2Say Something,” a school safety portal run by the state attorney general’s office. But Micciche allegedly did nothing, allowing more students to be targeted for months until police were tipped off in mid-2024.
Cops arrested the student accused of creating the harmful content in August, seizing the student’s phone as they investigated the origins of the AI-generated images. But the arrest did little to satisfy parents, who were shocked by the school’s failure to uphold its mandatory reporting responsibilities following any suspicion of child abuse. Last week, they filed a court summons threatening to sue unless the school leaders responsible for the mishandled response resigned within 48 hours.
This tactic successfully pushed Micciche and the school board’s president, Angela Ang-Alhadeff, to “part ways” with the school, both resigning effective late Friday, Lancaster Online reported.
In a statement announcing that classes were canceled Monday, Lancaster Country Day School—which, according to Wikipedia, serves about 600 students in pre-kindergarten through high school—offered support during this “difficult time” for the community.
Parents do not seem ready to drop the suit, as the school leaders seemingly dragged their feet, resigning two days after the parents’ deadline. The parents’ lawyer, Matthew Faranda-Diedrich, told Lancaster Online Monday that “the lawsuit would still be pursued despite executive changes.”
Major technology companies, including Google, Apple, and Discord, have been enabling people to quickly sign up to harmful “undress” websites, which use AI to remove clothes from real photos to make victims appear to be “nude” without their consent. More than a dozen of these deepfake websites have been using login buttons from the tech companies for months.
A WIRED analysis found 16 of the biggest so-called undress and “nudify” websites using the sign-in infrastructure from Google, Apple, Discord, Twitter, Patreon, and Line. This approach allows people to easily create accounts on the deepfake websites—offering them a veneer of credibility—before they pay for credits and generate images.
While bots and websites that create nonconsensual intimate images of women and girls have existed for years, their number has increased with the introduction of generative AI. This kind of “undress” abuse is alarmingly widespread, with teenage boys allegedly creating images of their classmates. Tech companies have been slow to deal with the scale of the issue, critics say, with the websites appearing high in search results, paid advertisements promoting them on social media, and apps showing up in app stores.
“This is a continuation of a trend that normalizes sexual violence against women and girls by Big Tech,” says Adam Dodge, a lawyer and founder of EndTAB (Ending Technology-Enabled Abuse). “Sign-in APIs are tools of convenience. We should never be making sexual violence an act of convenience,” he says. “We should be putting up walls around the access to these apps, and instead we’re giving people a drawbridge.”
The sign-in tools analyzed by WIRED, which are deployed through APIs and common authentication methods, allow people to use existing accounts to join the deepfake websites. Google’s login system appeared on 16 websites, Discord’s appeared on 13, and Apple’s on six. X’s button was on three websites, with Patreon and messaging service Line’s both appearing on the same two websites.
WIRED is not naming the websites, since they enable abuse. Several are part of wider networks and owned by the same individuals or companies. The login systems have been used despite the tech companies broadly having rules that state developers cannot use their services in ways that would enable harm or harassment or invade people’s privacy.
After being contacted by WIRED, spokespeople for Discord and Apple said they have removed the developer accounts connected to their websites. Google said it will take action against developers when it finds its terms have been violated. Patreon said it prohibits accounts that allow explicit imagery to be created, and Line confirmed it is investigating but said it could not comment on specific websites. X did not reply to a request for comment about the way its systems are being used.
In the hours after Jud Hoffman, Discord vice president of trust and safety, told WIRED that Discord had terminated the websites’ access to its APIs for violating its developer policy, one of the undress websites posted in a Telegram channel that authorization via Discord was “temporarily unavailable” and claimed it was trying to restore access. That undress service did not respond to WIRED’s request for comment about its operations.
Because of “a concerning increase in generated images and videos that portray people in sexually explicit contexts, distributed on the web without their consent,” Google said that it consulted with “experts and victim-survivors” to make some “significant updates” to its widely used search engine to “further protect people.”
Specifically, Google made it easier for targets of fake explicit images—which experts have said are overwhelmingly women—to report and remove deepfakes that surface in search results. Additionally, Google took steps to downrank explicit deepfakes “to keep this type of content from appearing high up in Search results,” the world’s leading search engine said.
Victims of deepfake pornography have previously criticized Google for not being more proactive in its fight against deepfakes in search results. Surfacing images and reporting each one is a “time- and energy-draining process” and “constant battle,” Kaitlyn Siragusa, a Twitch gamer with an explicit OnlyFans frequently targeted by deepfakes, told Bloomberg last year.
In response, Google has worked to “make the process easier,” partly by “helping people address this issue at scale.” Now, when a victim submits a removal request, “Google’s systems will also aim to filter all explicit results on similar searches about them,” Google’s blog said. And once a deepfake is “successfully removed,” Google “will scan for—and remove—any duplicates of that image that we find,” the blog said.
Google’s efforts to downrank harmful fake content have also expanded, the tech giant said. To help individuals targeted by deepfakes, Google will now “lower explicit fake content for” searches that include people’s names. According to Google, this step alone has “reduced exposure to explicit image results on these types of queries by over 70 percent.”
However, Google still seems resistant to downranking general searches that might lead people to harmful content. A quick Google search confirms that general queries with keywords like “celebrity nude deepfake” still point searchers to popular destinations where they can search for non-consensual intimate images of celebrities or request images of less famous people.
For victims, the bottom line is that problematic links will still appear in Google’s search results for anyone willing to keep scrolling or anyone intentionally searching for “deepfakes.” The only step Google has taken recently to downrank top deepfake sites like Fan-Topia or MrDeepFakes is a promise to demote “sites that have received a high volume of removals for fake explicit imagery.”
It’s currently unclear what Google considers a “high volume,” and Google declined Ars’ request to comment on whether these sites would be downranked eventually. Instead, a Google spokesperson told Ars that “if we receive a high volume of successful removals from a specific website under this policy, we will use that as a ranking signal and demote the site in question for queries where the site might surface.”
Currently, Google’s spokesperson said, Google is focused on downranking “queries that include the names of individuals,” which “have the highest potential for individual harm.” But more queries will be downranked in the coming months, Google’s spokesperson said, and Google continues to tackle the “technical challenge for search engines” of differentiating between “explicit content that’s real and consensual (like an actor’s nude scenes)” and “explicit fake content (like deepfakes featuring said actor),” Google’s blog said.
“This is an ongoing effort, and we have additional improvements coming over the next few months to address a broader range of queries,” Google’s spokesperson told Ars.
Deepfake trauma “never ends”
In its blog, Google said that “these efforts are designed to give people added peace of mind, especially if they’re concerned about similar content about them popping up in the future.”
But many deepfake victims have claimed that dedicating hours or even months to removing harmful content offers no guarantee that the images won’t resurface. Most recently, one deepfake victim, Sabrina Javellana, told The New York Times that even after her home state of Florida passed a law against deepfakes, the fake images kept spreading online.
She’s given up on trying to get the images removed anywhere, telling The Times, “It just never ends. I just have to accept it.”
According to US Representative Joseph Morelle (D-NY), it will take a federal law against deepfakes to deter more bad actors from harassing and terrorizing women with deepfake porn. He has introduced one such bill, the Preventing Deepfakes of Intimate Images Act, which would criminalize creating deepfakes. It currently has 59 sponsors in the House and bipartisan support in the Senate, Morelle said on a panel this week discussing the harms of deepfakes, which Ars attended.
Morelle said he’d spoken to victims of deepfakes, including teenagers, and decided that “a national ban and a national set of both criminal and civil remedies makes the most sense” to combat the problem with “urgency.”
“A patchwork of different state and local jurisdictions with different rules” would be “really hard to follow” for both victims and perpetrators trying to understand what’s legal, Morelle said, whereas federal laws that impose a liability and criminal penalty would likely have “the greatest impact.”
Victims suffer mental, physical, emotional, and financial harms every day, Morelle said. And as his co-panelist Andrea Powell pointed out, there is no healing for survivors because there is currently no justice, during what she warned is a period of “prolific and catastrophic increase in this abuse.”
A Spanish youth court has sentenced 15 minors to one year of probation for spreading AI-generated nude images of female classmates in two WhatsApp groups.
The minors were charged with 20 counts of creating child sex abuse images and 20 counts of offenses against their victims’ moral integrity. In addition to probation, the teens will also be required to attend classes on gender and equality, as well as on the “responsible use of information and communication technologies,” a press release from the Juvenile Court of Badajoz said.
Many of the victims were too ashamed to speak up when the inappropriate fake images began spreading last year. Prior to the sentencing, a mother of one of the victims told The Guardian that girls like her daughter “were completely terrified and had tremendous anxiety attacks because they were suffering this in silence.”
The court confirmed that the teens used artificial intelligence to create images where female classmates “appear naked” by swiping photos from their social media profiles and superimposing their faces on “other naked female bodies.”
Teens using AI to sexualize and harass classmates has become an alarming global trend. Police have probed disturbing cases in both high schools and middle schools in the US, and earlier this year, the European Union proposed expanding its definition of child sex abuse to more effectively “prosecute the production and dissemination of deepfakes and AI-generated material.” Last year, US President Joe Biden issued an executive order urging lawmakers to pass more protections.
In addition to mental health impacts, victims have reported losing trust in classmates who targeted them and wanting to switch schools to avoid further contact with harassers. Others stopped posting photos online and remained fearful that the harmful AI images will resurface.
Minors targeting classmates may not realize exactly how far images can potentially spread when generating fake child sex abuse materials (CSAM); they could even end up on the dark web. An investigation by the United Kingdom-based Internet Watch Foundation (IWF) last year reported that “20,254 AI-generated images were found to have been posted to one dark web CSAM forum in a one-month period,” with more than half determined most likely to be criminal.
IWF warned that it has identified a growing market for AI-generated CSAM and concluded that “most AI CSAM found is now realistic enough to be treated as ‘real’ CSAM.” One “shocked” mother of a female classmate victimized in Spain agreed. She told The Guardian that “if I didn’t know my daughter’s body, I would have thought that image was real.”
More drastic steps to stop deepfakes
While lawmakers struggle to apply existing protections against CSAM to AI-generated images or to update laws to explicitly prosecute the offense, other more drastic solutions to prevent the harmful spread of deepfakes have been proposed.
In an op-ed for The Guardian today, journalist Lucia Osborne-Crowley advocated for laws restricting sites used to both generate and surface deepfake pornography, including regulating this harmful content when it appears on social media sites and search engines. And IWF suggested that, like jurisdictions that restrict sharing bomb-making information, lawmakers could also restrict guides instructing bad actors on how to use AI to generate CSAM.
The Malvaluna Association, which represented families of victims in Spain and broadly advocates for better sex education, told El Diario that beyond more regulations, more education is needed to stop teens motivated to use AI to attack classmates. Because the teens were ordered to attend classes, the association agreed to the sentencing measures.
“Beyond this particular trial, these facts should make us reflect on the need to educate people about equality between men and women,” the Malvaluna Association said. The group urged that today’s kids should not be learning about sex through pornography that “generates more sexism and violence.”
The teens sentenced in Spain were between the ages of 13 and 15. According to The Guardian, Spanish law prevents the sentencing of minors under 14, but the youth court “can force them to take part in rehabilitation courses.”
Tech companies could also make it easier to report and remove harmful deepfakes. Ars could not immediately reach Meta for comment on efforts to combat the proliferation of AI-generated CSAM on WhatsApp, the private messaging app that was used to share fake images in Spain.
An FAQ said that “WhatsApp has zero tolerance for child sexual exploitation and abuse, and we ban users when we become aware they are sharing content that exploits or endangers children,” but it does not mention AI.
Human Rights Watch (HRW) continues to reveal how photos of real children casually posted online years ago are being used to train AI models powering image generators—even when platforms prohibit scraping and families use strict privacy settings.
Last month, HRW researcher Hye Jung Han found 170 photos of Brazilian kids that were linked in LAION-5B, a popular AI dataset built from Common Crawl snapshots of the public web. Now, she has released a second report, flagging 190 photos of children from all of Australia’s states and territories, including Indigenous children who may be particularly vulnerable to harms.
These photos are linked in the dataset “without the knowledge or consent of the children or their families.” They span the entirety of childhood, making it possible for AI image generators to generate realistic deepfakes of real Australian children, Han’s report said. Perhaps even more concerning, the URLs in the dataset sometimes reveal identifying information about children, including their names and locations where photos were shot, making it easy to track down children whose images might not otherwise be discoverable online.
That exposes children to privacy and safety risks, Han said, and some parents who think they’ve protected their kids’ privacy online may not realize these risks exist.
From a single link to one photo that showed “two boys, ages 3 and 4, grinning from ear to ear as they hold paintbrushes in front of a colorful mural,” Han could trace “both children’s full names and ages, and the name of the preschool they attend in Perth, in Western Australia.” And perhaps most disturbingly, “information about these children does not appear to exist anywhere else on the Internet”—suggesting that families were particularly cautious in shielding these boys’ identities online.
Stricter privacy settings were used in another image that Han found linked in the dataset. The photo showed “a close-up of two boys making funny faces, captured from a video posted on YouTube of teenagers celebrating” during the week after their final exams, Han reported. Whoever posted that YouTube video adjusted privacy settings so that it would be “unlisted” and would not appear in searches.
Only someone with a link to the video was supposed to have access, but that didn’t stop Common Crawl from archiving the image, nor did YouTube policies prohibiting AI scraping or harvesting of identifying information.
Reached for comment, YouTube’s spokesperson, Jack Malon, told Ars that YouTube has “been clear that the unauthorized scraping of YouTube content is a violation of our Terms of Service, and we continue to take action against this type of abuse.” But Han worries that even if YouTube did join efforts to remove images of children from the dataset, the damage has been done, since AI tools have already trained on them. That’s why, Han’s report said, kids need regulators to intervene and stop AI training before it happens, even more than they need tech companies to up their game in blocking such training after the fact.
Han’s report comes a month before Australia is expected to release a reformed draft of the country’s Privacy Act. Those reforms include a draft of Australia’s first child data protection law, known as the Children’s Online Privacy Code, but Han told Ars that even people involved in long-running discussions about reforms aren’t “actually sure how much the government is going to announce in August.”
“Children in Australia are waiting with bated breath to see if the government will adopt protections for them,” Han said, emphasizing in her report that “children should not have to live in fear that their photos might be stolen and weaponized against them.”
AI uniquely harms Australian kids
To hunt down the photos of Australian kids, Han “reviewed fewer than 0.0001 percent of the 5.85 billion images and captions contained in the data set.” Because her sample was so small, Han expects that her findings represent a significant undercount of how many children could be impacted by the AI scraping.
“It’s astonishing that out of a random sample size of about 5,000 photos, I immediately fell into 190 photos of Australian children,” Han told Ars. “You would expect that there would be more photos of cats than there are personal photos of children,” since LAION-5B is a “reflection of the entire Internet.”
LAION is working with HRW to remove links to all the images flagged, but cleaning up the dataset does not seem to be a fast process. Han told Ars that based on her most recent exchange with the German nonprofit, LAION had not yet removed links to photos of Brazilian kids that she reported a month ago.
LAION declined Ars’ request for comment.
In June, LAION’s spokesperson, Nathan Tyler, told Ars that, “as a nonprofit, volunteer organization,” LAION is committed to doing its part to help with the “larger and very concerning issue” of misuse of children’s data online. But removing links from the LAION-5B dataset does not remove the images online, Tyler noted, where they can still be referenced and used in other AI datasets, particularly those relying on Common Crawl. And Han pointed out that removing the links from the dataset doesn’t change AI models that have already trained on them.
“Current AI models cannot forget data they were trained on, even if the data was later removed from the training data set,” Han’s report said.
Kids whose images are used to train AI models are exposed to a variety of harms, Han reported, including a risk that image generators could more convincingly create harmful or explicit deepfakes. In Australia last month, “about 50 girls from Melbourne reported that photos from their social media profiles were taken and manipulated using AI to create sexually explicit deepfakes of them, which were then circulated online,” Han reported.
For First Nations children—”including those identified in captions as being from the Anangu, Arrernte, Pitjantjatjara, Pintupi, Tiwi, and Warlpiri peoples”—the inclusion of links to photos threatens unique harms. Because culturally, First Nations peoples “restrict the reproduction of photos of deceased people during periods of mourning,” Han said the AI training could perpetuate harms by making it harder to control when images are reproduced.
Once an AI model trains on the images, there are other obvious privacy risks, including a concern that AI models are “notorious for leaking private information,” Han said. Guardrails added to image generators do not always prevent these leaks, with some tools “repeatedly broken,” Han reported.
LAION recommends that, if troubled by the privacy risks, parents remove images of kids online as the most effective way to prevent abuse. But Han told Ars that’s “not just unrealistic, but frankly, outrageous.”
“The answer is not to call for children and parents to remove wonderful photos of kids online,” Han said. “The call should be [for] some sort of legal protections for these photos, so that kids don’t have to always wonder if their selfie is going to be abused.”
Photos of Brazilian kids—sometimes spanning their entire childhood—have been used without their consent to power AI tools, including popular image generators like Stable Diffusion, Human Rights Watch (HRW) warned on Monday.
This act poses urgent privacy risks to kids and seems to increase risks of non-consensual AI-generated images bearing their likenesses, HRW’s report said.
An HRW researcher, Hye Jung Han, helped expose the problem. She analyzed “less than 0.0001 percent” of LAION-5B, a dataset built from Common Crawl snapshots of the public web. The dataset does not contain the actual photos but includes image-text pairs derived from 5.85 billion images and captions posted online since 2008.
Among those images linked in the dataset, Han found 170 photos of children from at least 10 Brazilian states. These were mostly family photos uploaded to personal and parenting blogs most Internet surfers wouldn’t easily stumble upon, “as well as stills from YouTube videos with small view counts, seemingly uploaded to be shared with family and friends,” Wired reported.
LAION, the German nonprofit that created the dataset, has worked with HRW to remove the links to the children’s images in the dataset.
That may not completely resolve the problem, though. HRW’s report warned that the removed links are “likely to be a significant undercount of the total amount of children’s personal data that exists in LAION-5B.” Han told Wired that she fears that the dataset may still be referencing personal photos of kids “from all over the world.”
Removing the links also does not remove the images from the public web, where they can still be referenced and used in other AI datasets, particularly those relying on Common Crawl, LAION’s spokesperson, Nathan Tyler, told Ars.
“This is a larger and very concerning issue, and as a nonprofit, volunteer organization, we will do our part to help,” Tyler told Ars.
Han told Ars that “Common Crawl should stop scraping children’s personal data, given the privacy risks involved and the potential for new forms of misuse.”
According to HRW’s analysis, many of the Brazilian children’s identities were “easily traceable,” due to children’s names and locations being included in image captions that were processed when building the LAION dataset.
And at a time when middle and high school-aged students are at greater risk of being targeted by bullies or bad actors turning “innocuous photos” into explicit imagery, it’s possible that AI tools may be better equipped to generate AI clones of kids whose images are referenced in AI datasets, HRW suggested.
“The photos reviewed span the entirety of childhood,” HRW’s report said. “They capture intimate moments of babies being born into the gloved hands of doctors, young children blowing out candles on their birthday cake or dancing in their underwear at home, students giving a presentation at school, and teenagers posing for photos at their high school’s carnival.”
There is less risk that the Brazilian kids’ photos are currently powering AI tools since “all publicly available versions of LAION-5B were taken down” in December, Tyler told Ars. That decision came out of an “abundance of caution” after a Stanford University report “found links in the dataset pointing to illegal content on the public web,” Tyler said, including 3,226 suspected instances of child sexual abuse material.
Han told Ars that “the version of the dataset that we examined pre-dates LAION’s temporary removal of its dataset in December 2023.” The dataset will not be available again until LAION determines that all flagged illegal content has been removed.
“LAION is currently working with the Internet Watch Foundation, the Canadian Centre for Child Protection, Stanford, and Human Rights Watch to remove all known references to illegal content from LAION-5B,” Tyler told Ars. “We are grateful for their support and hope to republish a revised LAION-5B soon.”
In Brazil, “at least 85 girls” have reported classmates harassing them by using AI tools to “create sexually explicit deepfakes of the girls based on photos taken from their social media profiles,” HRW reported. Once these explicit deepfakes are posted online, they can inflict “lasting harm,” HRW warned, potentially remaining online for their entire lives.
“Children should not have to live in fear that their photos might be stolen and weaponized against them,” Han said. “The government should urgently adopt policies to protect children’s data from AI-fueled misuse.”
Ars could not immediately reach Stable Diffusion maker Stability AI for comment.