Policy

Cops lure pedophiles with AI pics of teen girl. Ethical triumph or new disaster?

Who is she? —

New Mexico sued Snapchat after using AI to reveal child safety risks.

Cops are now using AI to generate images of fake kids, which are helping them catch child predators online, a lawsuit filed by the state of New Mexico against Snapchat revealed this week.

According to the complaint, the New Mexico Department of Justice launched an undercover investigation in recent months to prove that Snapchat “is a primary social media platform for sharing child sexual abuse material (CSAM)” and sextortion of minors, because its “algorithm serves up children to adult predators.”

As part of their probe, an investigator “set up a decoy account for a 14-year-old girl, Sexy14Heather.”

  • An AI-generated image of “Sexy14Heather” included in the New Mexico complaint.

  • An image of a Snapchat avatar for “Sexy14Heather” included in the New Mexico complaint.

Despite the investigator setting the fake minor’s profile to private and the account not adding any followers, “Heather” was soon recommended widely to “dangerous accounts, including ones named ‘child.rape’ and ‘pedo_lover10,’ in addition to others that are even more explicit,” the New Mexico DOJ said in a press release.

And after “Heather” accepted a follow request from just one account, the recommendations got even worse. “Snapchat suggested over 91 users, including numerous adult users whose accounts included or sought to exchange sexually explicit content,” New Mexico’s complaint alleged.

“Snapchat is a breeding ground for predators to collect sexually explicit images of children and to find, groom, and extort them,” New Mexico’s complaint alleged.

Posing as “Sexy14Heather,” the investigator swapped messages with adult accounts, including users who “sent inappropriate messages and explicit photos.” In one exchange with a user named “50+ SNGL DAD 4 YNGR,” the fake teen “noted her age, sent a photo, and complained about her parents making her go to school,” prompting the user to send “his own photo” as well as sexually suggestive chats. Other accounts asked “Heather” to “trade presumably explicit content,” and several “attempted to coerce the underage persona into sharing CSAM,” the New Mexico DOJ said.

“Heather” also tested out Snapchat’s search tool, finding that “even though she used no sexually explicit language, the algorithm must have determined that she was looking for CSAM” when she searched for other teen users. It “began recommending users associated with trading” CSAM, including accounts with usernames such as “naughtypics,” “addfortrading,” “teentr3de,” “gayhorny13yox,” and “teentradevirgin,” the investigation found, “suggesting that these accounts also were involved in the dissemination of CSAM.”

This novel use of AI was prompted after Albuquerque police arrested a man, Alejandro Marquez, who pled guilty and was sentenced to 18 years for raping an 11-year-old girl he met through Snapchat’s Quick Add feature in 2022, New Mexico’s complaint said. More recently, the complaint said, an Albuquerque man, Jeremy Guthrie, was arrested and sentenced this summer for “raping a 12-year-old girl who he met and cultivated over Snapchat.”

In the past, police have posed as kids online to catch child predators using photos of younger-looking adult women or even childhood photos of the officers themselves. Using AI-generated images could be considered a more ethical way to conduct these stings, a lawyer specializing in sex crimes, Carrie Goldberg, told Ars, because “an AI decoy profile is less problematic than using images of an actual child.”

But using AI could complicate investigations and carry its own ethical concerns, Goldberg warned, at a time when child safety experts and law enforcement caution that the Internet is increasingly swamped with AI-generated CSAM.

“In terms of AI being used for entrapment, defendants can defend themselves if they say the government induced them to commit a crime that they were not already predisposed to commit,” Goldberg told Ars. “Of course, it would be ethically concerning if the government were to create deepfake AI child sexual abuse material (CSAM), because those images are illegal, and we don’t want more CSAM in circulation.”

Experts have warned that AI image generators should never be trained on datasets that combine images of real kids with explicit content to avoid any instances of AI-generated CSAM, which is particularly harmful when it appears to depict a real kid or an actual victim of child abuse.

Only one AI-generated image is included in the New Mexico complaint, so it’s unclear how widely the state’s DOJ is using AI or whether cops are using more advanced methods to generate multiple images of the same fake kid. It’s also unclear what ethical concerns were weighed before cops began using AI decoys.

The New Mexico DOJ did not respond to Ars’ request for comment.

Goldberg told Ars that “there ought to be standards within law enforcement with how to use AI responsibly,” warning that “we are likely to see more entrapment defenses centered around AI if the government is using the technology in a manipulative way to pressure somebody into committing a crime.”

Telegram is not an “anarchic paradise,” CEO Pavel Durov says after arrest

Telegram CEO Pavel Durov, in his first public comments since being arrested by French authorities, said that Telegram is not an “anarchic paradise” but promised that the platform will enhance its moderation of harmful content.

While Telegram has room for improvement, “the claims in some media that Telegram is some sort of anarchic paradise are absolutely untrue,” Durov wrote on Telegram yesterday. “We take down millions of harmful posts and channels every day. We publish daily transparency reports (like this or this). We have direct hotlines with NGOs to process urgent moderation requests faster.”

The links Durov provided go to Telegram channels that report the number of groups and channels banned for terrorist content and child-abuse content. Telegram has been criticized by groups such as the National Center for Missing and Exploited Children (NCMEC) for allegedly not cooperating on removal of child sexual abuse material.

Durov said Telegram has heard criticism that its moderation efforts are “not enough,” adding that “Telegram’s abrupt increase in user count to 950M caused growing pains that made it easier for criminals to abuse our platform. That’s why I made it my personal goal to ensure we significantly improve things in this regard. We’ve already started that process internally, and I will share more details on our progress with you very soon.”

Durov is forbidden from leaving France after his indictment last week. Prosecutor Laure Beccuau alleged that law enforcement authorities received a near-total lack of response from Telegram to requests for cooperation in cases related to crimes against minors, drug crimes, and online hate.

FAQ signals new approach to “private” messages

Telegram already made a change to its FAQ in a section on how the company handles illegal content. The change suggests Telegram may do more moderation of private messages.

An Internet Archive capture of the FAQ page from yesterday contained the following text:

Q: There’s illegal content on Telegram. How do I take it down?

All Telegram chats and group chats are private amongst their participants. We do not process any requests related to them.

But sticker sets, channels, and bots on Telegram are publicly available. If you find sticker sets or bots on Telegram that you think are illegal, please ping us at [email protected].

You can also use the ‘report’ buttons right inside our apps, see this post on our official @ISISwatch channel for details.

That section in the current version of the FAQ page was heavily rewritten. The statement that all chats are private and that Telegram does not “process any requests related to them” has been removed. It now says, “All Telegram apps have ‘Report’ buttons that let you flag illegal content for our moderators—in just a few taps,” and goes on to provide more specific instructions on how to report illegal content in messages.

Some of the key language removed from the section on illegal content remains in the FAQ section on how to report copyright infringement. The copyright section still contains the statement that all chats are private and that Telegram does not “process any requests related to them.” Despite that, testing the app today showed that clicking “Report” on a Telegram message provides an option to report copyright infringement. Users can also report messages for spam, violence, pornography, child abuse, illegal drugs, or personal details.

Telegram messages do not have end-to-end encryption by default, but the security feature can be enabled for one-on-one conversations. The app has social-network features letting users create groups of up to 200,000 people and channels for posting of public messages to audiences of any size. Telegram users cannot enable end-to-end encryption on group messages.

Verizon to buy Frontier for $9.6 billion, says it will expand fiber network

Verizon/Frontier merger —

Verizon once sold part of its network to Frontier; now it’s buying the company.

A Verizon FiOS truck in Manhattan on September 15, 2017.

Verizon today announced a deal to acquire Frontier Communications, an Internet service provider with about 3 million customers in 25 states. Verizon said the all-cash transaction is valued at $20 billion.

Verizon agreed to pay $9.6 billion in cash and will take on more than $10 billion in debt held by Frontier, which together account for the roughly $20 billion total value. Verizon said the deal is subject to regulatory approval and a vote by Frontier shareholders and is expected to be completed in 18 months.

“Under the terms of the agreement, Verizon will acquire Frontier for $38.50 per share in cash, representing a premium of 43.7 percent to Frontier’s 90-Day volume-weighted average share price (VWAP) on September 3, 2024, the last trading day prior to media reports regarding a potential acquisition of Frontier,” Verizon said.
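
As a quick sanity check on those numbers (assuming the stated premium is measured against that same 90-day VWAP): $38.50 / 1.437 ≈ $26.79, implying Frontier’s volume-weighted average share price was roughly $27 before reports of a potential acquisition surfaced.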

Assuming regulatory and shareholder approval, Verizon will be buying back a portion of the network it sold to Frontier eight years ago. In 2016, Frontier bought Verizon’s FiOS and DSL operations in Florida, California, and Texas. The changeover was marred by technical problems that caused weeks of outages for tens of thousands of customers.

Frontier, which had also purchased the Connecticut portion of AT&T’s network, struggled for many years and filed for bankruptcy in April 2020. It was criticized by regulators for not properly maintaining its copper phone network. Frontier emerged from bankruptcy in 2021 with a plan to upgrade many of its outdated copper DSL locations with fiber-to-the-home service.

“Frontier’s 2.2 million fiber subscribers across 25 states will join Verizon’s approximately 7.4 million FiOS connections in 9 states and Washington, D.C.,” Verizon said. “In addition to Frontier’s 7.2 million fiber locations, the company is committed to its plan to build out an additional 2.8 million fiber locations by the end of 2026.”
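
(If those build-out figures hold, Frontier’s 7.2 million current fiber locations plus the planned 2.8 million additions would bring it to roughly 10 million fiber locations by the end of 2026.)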

Combined, the Verizon and Frontier fiber networks pass over 25 million premises in 31 states and the District of Columbia, the companies said. Verizon and Frontier both “expect to increase their fiber penetration between now and closing,” they said.

Frontier “complementary” to Verizon’s Northeast market

Frontier has 2.05 million residential fiber customers and 721,000 residential copper DSL customers, according to an earnings report. In the business and wholesale category, Frontier has 134,000 fiber customers and 102,000 copper customers. Frontier reported $1.48 billion in revenue in Q2 2024 and a net loss of $123 million.

Verizon said Frontier’s recent investment in fiber made it a more attractive acquisition target. “Over approximately four years, Frontier has invested $4.1 billion upgrading and expanding its fiber network, and now derives more than 50 percent of its revenue from fiber products,” Verizon said.

Verizon FiOS is available in parts of Connecticut, Delaware, Maryland, Massachusetts, New York, New Jersey, Virginia, Rhode Island, Pennsylvania, and the District of Columbia. Verizon said Frontier’s footprint is “highly complementary to Verizon’s core Northeast and Mid-Atlantic markets,” and will help grow the number of customers who purchase both home Internet and mobile service.

Frontier is available in parts of Alabama, Arizona, California, Connecticut, Florida, Georgia, Illinois, Indiana, Iowa, Michigan, Minnesota, Mississippi, Nebraska, Nevada, New Mexico, New York, North Carolina, Ohio, Pennsylvania, South Carolina, Tennessee, Texas, Utah, West Virginia, and Wisconsin.

Internet Archive’s e-book lending is not fair use, appeals court rules

The Internet Archive has lost its appeal after book publishers successfully sued to block the Open Libraries Project from lending digital scans of books for free online.

Judges for the Second Circuit Court of Appeals on Wednesday rejected the Internet Archive’s (IA) argument that its controlled digital lending—which allows only one person to borrow each scanned e-book at a time—was a transformative fair use that worked like a traditional library and did not violate copyright law.

As Judge Beth Robinson wrote in the decision, because the IA’s digital copies of books did not “provide criticism, commentary, or information about the originals” or alter the original books to add “something new,” the court concluded that the IA’s use of publishers’ books was not transformative, hobbling the organization’s fair use defense.

“IA’s digital books serve the same exact purpose as the originals: making authors’ works available to read,” Robinson said, emphasizing that although in copyright law, “[n]ot every instance will be clear cut,” “this one is.”

The appeals court ruling affirmed the lower court’s ruling, which permanently barred the IA from distributing not just the works in the suit, but all books “available for electronic licensing,” Robinson said.

“To construe IA’s use of the Works as transformative would significantly narrow―if not entirely eviscerate―copyright owners’ exclusive right to prepare (or not prepare) derivative works,” Robinson wrote.

Maria Pallante, president and CEO of the Association of American Publishers, the trade organization behind the lawsuit, celebrated the ruling. She said the court upheld “the rights of authors and publishers to license and be compensated for their books and other creative works and reminds us in no uncertain terms that infringement is both costly and antithetical to the public interest.”

“If there was any doubt, the Court makes clear that under fair use jurisprudence there is nothing transformative about converting entire works into new formats without permission or appropriating the value of derivative works that are a key part of the author’s copyright bundle,” Pallante said.

The Internet Archive’s director of library services, Chris Freeland, issued a statement on the loss, which comes after four years of fighting to maintain its Open Libraries Project.

“We are disappointed in today’s opinion about the Internet Archive’s digital lending of books that are available electronically elsewhere,” Freeland said. “We are reviewing the court’s opinion and will continue to defend the rights of libraries to own, lend, and preserve books.”

IA’s lending harmed publishers, judge says

The court’s fair use analysis didn’t solely hinge on whether IA’s digital lending of e-books was “transformative.” Judges also had to consider book publishers’ claims that IA was profiting off e-book lending, in addition to factoring in whether each work was original, what amount of each work was being copied, and whether the IA’s e-books substituted original works, depriving authors of revenue in relevant markets.

Ultimately, for each factor, judges ruled in favor of publishers, which argued that IA’s lending threatened to “‘destroy the value of [their] exclusive right to prepare derivative works,’ including the right to publish their authors’ works as e-books.”

While the IA tried to argue that book publishers’ surging profits suggested that its digital lending caused no market harms, Robinson disagreed with the IA’s experts’ “ill-supported” market analysis and took issue with IA advertising “its digital books as a free alternative to Publishers’ print and e-books.”

“IA offers effectively the same product as Publishers―full copies of the Works―but at no cost to consumers or libraries,” Robinson wrote. “At least in this context, it is difficult to compete with free.”

Robinson wrote that although book publishers showed no proof of market harms, that lack of evidence did not support IA’s case; the burden was on IA to prove it had not harmed publishers, and IA failed to satisfy it. She further wrote that it’s common sense to agree with publishers’ characterization of harms because “IA’s digital books compete directly with Publishers’ e-books” and would deprive authors of revenue if left unchecked.

“We agree with Publishers’ assessment of market harm” and “are likewise convinced” that “unrestricted and widespread conduct of the sort engaged in by [IA] would result in a substantially adverse impact on the potential market” for publishers’ e-books, Robinson wrote. “Though Publishers have not provided empirical data to support this observation, we routinely rely on such logical inferences where appropriate” when determining fair use.

Judges did, however, side with IA on the matter of whether the nonprofit was profiting off loaning e-books for free, contradicting the lower court. The appeals court disagreed with book publishers’ claims that IA profited off e-books by soliciting donations or earning a small percentage from used books sold through referral links on its site.

“Of course, IA must solicit some funds to keep the lights on,” Robinson wrote. But “IA does not profit directly from its Free Digital Library,” and it would be “misleading” to characterize it that way.

“To hold otherwise would greatly restrain the ability of nonprofits to seek donations while making fair use of copyrighted works,” Robinson wrote.

DOJ subpoenas Nvidia in deepening AI antitrust probe, report says

The Department of Justice is reportedly deepening its probe into Nvidia. Officials have moved on from merely questioning competitors to subpoenaing Nvidia and other tech companies for evidence that could substantiate allegations that Nvidia is abusing its “dominant position in AI computing,” Bloomberg reported.

When news of the DOJ’s probe into the trillion-dollar company was first reported in June, Fast Company reported that scrutiny was intensifying merely because Nvidia was estimated to control “as much as 90 percent of the market for chips” capable of powering AI models. Experts told Fast Company that the DOJ probe might even be good for Nvidia’s business, noting that the market barely moved when the probe was first announced.

But the market’s confidence seemed to be shaken a little more on Tuesday, when Nvidia lost a “record-setting $279 billion” in market value following Bloomberg’s report. Nvidia’s losses became “the biggest single-day market-cap decline on record,” TheStreet reported.

People close to the DOJ’s investigation told Bloomberg that the DOJ’s “legally binding requests” require competitors “to provide information” on Nvidia’s suspected anticompetitive behaviors as a “dominant provider of AI processors.”

One concern is that Nvidia may be giving “preferential supply and pricing to customers who use its technology exclusively or buy its complete systems,” sources told Bloomberg. The DOJ is also reportedly probing Nvidia’s acquisition of RunAI—suspecting the deal may lock RunAI customers into using Nvidia chips.

Bloomberg’s report builds on a report last month from The Information that said that Advanced Micro Devices Inc. (AMD) and other Nvidia rivals were questioned by the DOJ—as well as third parties who could shed light on whether Nvidia potentially abused its market dominance in AI chips to pressure customers into buying more products.

According to Bloomberg’s sources, the DOJ is worried that “Nvidia is making it harder to switch to other suppliers and penalizes buyers that don’t exclusively use its artificial intelligence chips.”

In a statement to Bloomberg, Nvidia insisted that “Nvidia wins on merit, as reflected in our benchmark results and value to customers, who can choose whatever solution is best for them.” Additionally, Bloomberg noted that following a chip shortage in 2022, Nvidia CEO Jensen Huang has said that his company strives to prevent stockpiling of Nvidia’s coveted AI chips by prioritizing customers “who can make use of his products in ready-to-go data centers.”

Potential threats to Nvidia’s dominance

Despite the slump in shares, Nvidia’s market dominance seems unlikely to wane any time soon after its stock more than doubled this year. In an SEC filing this year, Nvidia bragged that its “accelerated computing ecosystem is bringing AI to every enterprise” with an “ecosystem” spanning “nearly 5 million developers and 40,000 companies.” Nvidia specifically highlighted that “more than 1,600 generative AI companies are building on Nvidia,” and according to Bloomberg, Nvidia will close out 2024 with more profits than the total sales of its closest competitor, AMD.

After the DOJ’s most recent big win, which proved that Google has a monopoly on search, the DOJ appears intent on getting ahead of any tech company’s ambitions to seize monopoly power and essentially become the Google of the AI industry. In June, DOJ antitrust chief Jonathan Kanter confirmed to the Financial Times that the DOJ is examining “monopoly choke points and the competitive landscape” in AI beyond just scrutinizing Nvidia.

According to Kanter, the DOJ is scrutinizing all aspects of the AI industry—”everything from computing power and the data used to train large language models, to cloud service providers, engineering talent and access to essential hardware such as graphics processing unit chips.” But in particular, the DOJ appears concerned that GPUs like Nvidia’s advanced AI chips remain a “scarce resource.” Kanter told the Financial Times that an “intervention” in “real time” to block a potential monopoly could be “the most meaningful intervention” and the least “invasive” as the AI industry grows.

TV viewers get screwed again as Disney channels are blacked out on DirecTV

Disney/DirecTV blackout —

Two days into blackout, DirecTV will fight Disney “as long as it needs to.”

Disney-owned channels have been blacked out on DirecTV for the past two days because of a contract dispute, with both companies claiming publicly that they aren’t willing to budge much from their negotiating positions. Until it’s resolved, DirecTV subscribers won’t have access to ABC, ESPN, and other Disney channels.

While there have been many contentious contract negotiations between TV providers and programmers, this one is “not a run-of-the-mill dispute,” DirecTV CFO Ray Carpenter said today in a call with reporters and analysts, according to The Hollywood Reporter. “This is not the kind of dispute where we’re haggling over percentage points on a rate. This is really about changing the model in a way that gives everyone confidence that this industry can survive.”

Carpenter was quoted as saying that DirecTV will fight Disney “as long as it needs to” and accused Disney of timing the blackout before big sporting events “to put the most pain and disruption on our customers.” Carpenter also said DirecTV doesn’t “have any dates drawn in the sand” and is “not playing a short-term game,” according to Variety.

On Sunday, Disney issued a statement attributed to three executives at Disney Entertainment and ESPN. “DirecTV chose to deny millions of subscribers access to our content just as we head into the final week of the US Open and gear up for college football and the opening of the NFL season,” the Disney statement said. “While we’re open to offering DirecTV flexibility and terms which we’ve extended to other distributors, we will not enter into an agreement that undervalues our portfolio of television channels and programs.”

DirecTV users must apply for $20 credits

DirecTV is offering $20 credits to affected customers, but the TV firm is not applying those credits automatically. Customers need to request a bill credit through a webpage DirecTV has set up for that purpose.

AT&T owns 70 percent of DirecTV after spinning it off into a new entity in 2021. AT&T explored options for selling its 70 percent stake almost a year ago. Private equity firm TPG owns the other 30 percent.

Based on previous TV carriage fights, a DirecTV/Disney agreement could be reached within days. A similar dispute between Disney and Charter Communications happened almost exactly a year ago and was resolved after eight days.

Carpenter said today that DirecTV wants to sell smaller channel packages and that Disney’s proposed terms conflict with that goal. Variety summarized his comments:

At the heart of the dispute, says Carpenter, is a desire by DirecTV to sell “skinnied down” packages of programming tailored to various subscriber interests, rather than forcing customers to take channels they may not want or watch very often. The company believes such a model would help retain subscribers, even if they were paying less. There is also interest in helping customers find other content, even if it’s not sold directly on the service, Carpenter says.

Streaming add-ons and “skinny” bundles

Last year’s agreement between Disney and Charter included access to the Disney+ and ESPN+ streaming services for Charter’s Spectrum cable customers. Carpenter was quoted by the Hollywood Reporter as saying there is “value” in that kind of deal, “but what’s important is that it’s not a replica of the model that got us here in the first place, where it has to be distributed and paid for by 100 percent or a large percentage of the customers.”

A lobby group that represents DirecTV and other TV providers, the American Television Alliance, blasted Disney for “seek[ing] to raise rates and force distributors to carry an unwieldy ‘one-size fits all’ bundle of more than a dozen channels to the vast majority of their subscribers.” The group said Disney’s proposed terms would require TV companies to sell “fat bundles” that “force consumers to pay for programming they don’t watch.”

Disney’s statement on Sunday claimed that DirecTV rejected its offer of “a fair, marketplace-based agreement.”

“DirecTV continues to push a narrative that they want to explore more flexible, ‘skinnier’ bundles and that Disney refuses to engage,” Disney said. “This is blatantly false. Disney has been negotiating with them in good faith for weeks and has proposed a variety of flexible options, in addition to innovative ways to work together in making Disney’s direct-to-consumer streaming services available to DirecTV’s customers.”

We contacted both companies today and will update this article if there are any major developments.

Disclosure: The Advance/Newhouse Partnership, which owns 12.4 percent of Charter, is part of Advance Publications, which also owns Ars Technica parent Condé Nast.

Cops’ favorite face image search engine fined $33M for privacy violation

A controversial facial recognition tech company behind a vast face image search engine widely used by cops has been fined approximately $33 million in the Netherlands for serious data privacy violations.

According to the Dutch Data Protection Authority (DPA), Clearview AI “built an illegal database with billions of photos of faces” by crawling the web and without gaining consent, including from people in the Netherlands.

Clearview AI’s technology—which has been banned in some US cities over concerns that it gives law enforcement unlimited power to track people in their daily lives—works by pulling in more than 40 billion face images from the web without setting “any limitations in terms of geographical location or nationality,” the Dutch DPA found. Perhaps most concerning, the Dutch DPA said, Clearview AI also provides “facial recognition software for identifying children,” therefore indiscriminately processing personal data of minors.

Trained on the face image data, the technology makes it possible to upload a photo of anyone and search for matches on the Internet. People appearing in search results, the Dutch DPA found, can be “unambiguously” identified. Billed as a public safety resource accessible only by law enforcement, Clearview AI’s face database casts too wide a net, the Dutch DPA said, with the majority of people pulled into the tool likely never becoming subject to a police search.

“The processing of personal data is not only complex and extensive, it moreover offers Clearview’s clients the opportunity to go through data about individual persons and obtain a detailed picture of the lives of these individual persons,” the Dutch DPA said. “These processing operations therefore are highly invasive for data subjects.”

Clearview AI had no legitimate interest under the European Union’s General Data Protection Regulation (GDPR) for the company’s invasive data collection, Dutch DPA Chairman Aleid Wolfsen said in a press release. The Dutch official likened Clearview AI’s sprawling overreach to “a doom scenario from a scary film,” while emphasizing in his decision that Clearview AI has not only stopped responding to any requests to access or remove data from citizens in the Netherlands, but across the EU.

“Facial recognition is a highly intrusive technology that you cannot simply unleash on anyone in the world,” Wolfsen said. “If there is a photo of you on the Internet—and doesn’t that apply to all of us?—then you can end up in the database of Clearview and be tracked.”

To protect Dutch citizens’ privacy, the Dutch DPA imposed a roughly $33 million fine that could grow by about $5.5 million if Clearview AI fails to comply with the DPA’s orders. Any Dutch businesses attempting to use Clearview AI services could also face “hefty fines,” the Dutch DPA warned, as that “is also prohibited” under the GDPR.

Clearview AI was given three months to appoint a representative in the EU, to stop processing personal data—including sensitive biometric data—from people in the Netherlands, and to update its privacy policies to inform users in the Netherlands of their rights under the GDPR. But the company only has one month to resume processing requests for data access or removals from people in the Netherlands who otherwise find it “impossible” to exercise their rights to privacy, the Dutch DPA’s decision said.

It appears that Clearview AI has no intentions to comply, however. Jack Mulcaire, the chief legal officer for Clearview AI, confirmed to Ars that the company maintains that it is not subject to the GDPR.

“Clearview AI does not have a place of business in the Netherlands or the EU, it does not have any customers in the Netherlands or the EU, and does not undertake any activities that would otherwise mean it is subject to the GDPR,” Mulcaire said. “This decision is unlawful, devoid of due process and is unenforceable.”

But the Dutch DPA found that GDPR applies to Clearview AI because it gathers personal information about Dutch citizens without their consent and without ever alerting users to the data collection at any point.

“People who are in the database also have the right to access their data,” the Dutch DPA said. “This means that Clearview has to show people which data the company has about them, if they ask for this. But Clearview does not cooperate in requests for access.”

Dutch DPA vows to investigate Clearview AI execs

In the press release, Wolfsen said that the Dutch DPA has “to draw a very clear line” underscoring the “incorrect use of this sort of technology” after Clearview AI refused to change its data collection practices following fines in other parts of the European Union, including Italy and Greece.

While Wolfsen acknowledged that Clearview AI could be used to enhance police investigations, he said that the technology would be more appropriate if it were managed by law enforcement “in highly exceptional cases only” and not indiscriminately by a private company.

“The company should never have built the database and is insufficiently transparent,” the Dutch DPA said.

Although Clearview AI appears ready to defend against the fine, the Dutch DPA said that the company failed to object within the provided six-week timeframe and therefore cannot appeal the decision.

Further, the Dutch DPA confirmed that authorities are “looking for ways to make sure that Clearview stops the violations” beyond the fines, including by “investigating if the directors of the company can be held personally responsible for the violations.”

Wolfsen claimed that such “liability already exists if directors know that the GDPR is being violated, have the authority to stop that, but omit to do so, and in this way consciously accept those violations.”

City of Columbus sues man after he discloses severity of ransomware attack

WHISTLEBLOWER IN LEGAL CROSSHAIRS —

Mayor said data was unusable to criminals; researcher proved otherwise.

A judge in Ohio has issued a temporary restraining order against a security researcher who presented evidence that a recent ransomware attack on the city of Columbus scooped up reams of sensitive personal information, contradicting claims made by city officials.

The order, issued by a judge in Ohio’s Franklin County, came after the city of Columbus fell victim to a ransomware attack on July 18 that siphoned 6.5 terabytes of the city’s data. A ransomware group known as Rhysida took credit for the attack and offered to auction off the data with a starting bid of about $1.7 million in bitcoin. On August 8, after the auction failed to find a bidder, Rhysida released what it said was about 45 percent of the stolen data on the group’s dark web site, which is accessible to anyone with a TOR browser.
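
If Rhysida’s figures are accurate, that release amounts to roughly 2.9 of the 6.5 stolen terabytes.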

Dark web not readily available to public—really?

Columbus Mayor Andrew Ginther said on August 13 that a “breakthrough” in the city’s forensic investigation of the breach found that the sensitive files Rhysida obtained were either encrypted or corrupted, making them “unusable” to the thieves. Ginther went on to say the data’s lack of integrity was likely the reason the ransomware group had been unable to auction off the data.

Shortly after Ginther made his remarks, security researcher David Leroy Ross contacted local news outlets and presented evidence that showed the data Rhysida published was fully intact and contained highly sensitive information regarding city employees and residents. Ross, who uses the alias Connor Goodwolf, presented screenshots and other data that showed the files Rhysida had posted included names from domestic violence cases and Social Security numbers for police officers and crime victims. Some of the data spanned years.

On Thursday, the city of Columbus sued Ross for alleged damages for criminal acts, invasion of privacy, negligence, and civil conversion. The lawsuit claimed that downloading documents from a dark web site run by ransomware attackers amounted to him “interacting” with them and required special expertise and tools. The suit went on to challenge Ross’s decision to alert reporters to the information, which it claimed would not be easily obtained by others.

“Only individuals willing to navigate and interact with the criminal element on the dark web, who also have the computer expertise and tools necessary to download data from the dark web, would be able to do so,” city attorneys wrote. “The dark web-posted data is not readily available for public consumption. Defendant is making it so.”

The same day, a Franklin County judge granted the city’s motion for a temporary restraining order against Ross. It bars the researcher “from accessing, and/or downloading, and/or disseminating” any city files that were posted to the dark web. The motion was made and granted “ex parte,” meaning in secret before Ross was informed of it or had an opportunity to present his case.

In a press conference Thursday, Columbus City Attorney Zach Klein defended his decision to sue Ross and obtain the restraining order.

“This is not about freedom of speech or whistleblowing,” he said. “This is about the downloading and disclosure of stolen criminal investigatory records. This effect is to get [Ross] to stop downloading and disclosing stolen criminal records to protect public safety.”

The Columbus city attorney’s office didn’t respond to questions sent by email. It did provide the following statement:

The lawsuit filed by the City of Columbus pertains to stolen data that Mr. Ross downloaded from the dark web to his own, local device and disseminated to the media. In fact, several outlets used the stolen data provided by Ross to go door-to-door and contact individuals using names and addresses contained within the stolen data. As has now been extensively reported, Mr. Ross also showed multiple news outlets stolen, confidential data belonging to the City which he claims reveal the identities of undercover police officers and crime victims as well as evidence from active criminal investigations. Sharing this stolen data threatens public safety and the integrity of the investigations. The temporary restraining order granted by the Court prohibits Mr. Ross from disseminating any of the City’s stolen data. Mr. Ross is still free to speak about the cyber incident and even describe what kind of data is on the dark web—he just cannot disseminate that data.

Attempts to reach Ross for comment were unsuccessful. Email sent to the Columbus mayor’s office went unanswered.

A screenshot showing the Rhysida dark web site.

As shown above in the screenshot of the Rhysida dark web site on Friday morning, the sensitive data remains available to anyone who looks for it. Friday’s order may bar Ross from accessing the data or disseminating it to reporters, but it has no effect on those who plan to use the data for malicious purposes.

Harmful “nudify” websites used Google, Apple, and Discord sign-on systems

Major technology companies, including Google, Apple, and Discord, have been enabling people to quickly sign up to harmful “undress” websites, which use AI to remove clothes from real photos to make victims appear to be “nude” without their consent. More than a dozen of these deepfake websites have been using login buttons from the tech companies for months.

A WIRED analysis found 16 of the biggest so-called undress and “nudify” websites using the sign-in infrastructure from Google, Apple, Discord, Twitter, Patreon, and Line. This approach allows people to easily create accounts on the deepfake websites—offering them a veneer of credibility—before they pay for credits and generate images.

While bots and websites that create nonconsensual intimate images of women and girls have existed for years, the number has increased with the introduction of generative AI. This kind of “undress” abuse is alarmingly widespread, with teenage boys allegedly creating images of their classmates. Tech companies have been slow to deal with the scale of the issues, critics say, with the websites ranking highly in search results, paid advertisements promoting them on social media, and apps showing up in app stores.

“This is a continuation of a trend that normalizes sexual violence against women and girls by Big Tech,” says Adam Dodge, a lawyer and founder of EndTAB (Ending Technology-Enabled Abuse). “Sign-in APIs are tools of convenience. We should never be making sexual violence an act of convenience,” he says. “We should be putting up walls around the access to these apps, and instead we’re giving people a drawbridge.”

The sign-in tools analyzed by WIRED, which are deployed through APIs and common authentication methods, allow people to use existing accounts to join the deepfake websites. Google’s login system appeared on 16 websites, Discord’s appeared on 13, and Apple’s on six. X’s button was on three websites, with Patreon and messaging service Line’s both appearing on the same two websites.

WIRED is not naming the websites, since they enable abuse. Several are part of wider networks and owned by the same individuals or companies. The login systems have been used despite the tech companies broadly having rules that state developers cannot use their services in ways that enable harm or harassment or that invade people’s privacy.

After being contacted by WIRED, spokespeople for Discord and Apple said they have removed the developer accounts connected to their websites. Google said it will take action against developers when it finds its terms have been violated. Patreon said it prohibits accounts that allow explicit imagery to be created, and Line confirmed it is investigating but said it could not comment on specific websites. X did not reply to a request for comment about the way its systems are being used.

In the hours after Jud Hoffman, Discord vice president of trust and safety, told WIRED it had terminated the websites’ access to its APIs for violating its developer policy, one of the undress websites posted in a Telegram channel that authorization via Discord was “temporarily unavailable” and claimed it was trying to restore access. That undress service did not respond to WIRED’s request for comment about its operations.

Nonprofit scrubs illegal content from controversial AI training dataset

After Stanford Internet Observatory researcher David Thiel found links to child sexual abuse materials (CSAM) in an AI training dataset tainting image generators, the controversial dataset was immediately taken down in 2023.

Now, the LAION (Large-scale Artificial Intelligence Open Network) team has released a scrubbed version of the LAION-5B dataset called Re-LAION-5B and claimed that it “is the first web-scale, text-link to images pair dataset to be thoroughly cleaned of known links to suspected CSAM.”

To scrub the dataset, LAION partnered with the Internet Watch Foundation (IWF) and the Canadian Center for Child Protection (C3P) to remove 2,236 links that matched with hashed images in the online safety organizations’ databases. Removals include all the links flagged by Thiel, as well as content flagged by LAION’s partners and other watchdogs, like Human Rights Watch, which warned of privacy issues after finding photos of real kids included in the dataset without their consent.

In his study, Thiel warned that “the inclusion of child abuse material in AI model training data teaches tools to associate children in illicit sexual activity and uses known child abuse images to generate new, potentially realistic child abuse content.”

Thiel urged LAION and other researchers scraping the Internet for AI training data to adopt a new safety standard that better filters out not just CSAM, but any explicit imagery that could be combined with photos of children to generate CSAM. (Recently, the US Department of Justice pointedly said that “CSAM generated by AI is still CSAM.”)

While LAION’s new dataset won’t alter models that were trained on the prior dataset, LAION claimed that Re-LAION-5B sets “a new safety standard for cleaning web-scale image-link datasets.” Where before illegal content “slipped through” LAION’s filters, the researchers have now developed an improved new system “for identifying and removing illegal content,” LAION’s blog said.

Thiel told Ars that he would agree that LAION has set a new safety standard with its latest release, but “there are absolutely ways to improve it.” However, “those methods would require possession of all original images or a brand new crawl,” and LAION’s post made clear that it only utilized image hashes and did not conduct a new crawl that could have risked pulling in more illegal or sensitive content. (On Threads, Thiel shared more in-depth impressions of LAION’s effort to clean the dataset.)

LAION warned that “current state-of-the-art filters alone are not reliable enough to guarantee protection from CSAM in web scale data composition scenarios.”

“To ensure better filtering, lists of hashes of suspected links or images created by expert organizations (in our case, IWF and C3P) are suitable choices,” LAION’s blog said. “We recommend research labs and any other organizations composing datasets from the public web to partner with organizations like IWF and C3P to obtain such hash lists and use those for filtering. In the longer term, a larger common initiative can be created that makes such hash lists available for the research community working on dataset composition from web.”
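
For a sense of what that recommendation looks like in practice, here is a minimal sketch of hash-list filtering, not LAION’s actual pipeline: it assumes a plain-text file of hex digests (one per line) supplied by a partner organization and drops any dataset row whose URL digest appears on the list. The file format, hash choice, and function names here are illustrative; real hash lists from groups like IWF and C3P may target image content (including perceptual hashes) rather than URLs.

    import hashlib

    def load_blocklist(path):
        # One lowercase hex digest per line; this format is assumed for illustration.
        with open(path) as f:
            return {line.strip().lower() for line in f if line.strip()}

    def url_digest(url):
        # Hash the link itself; LAION-style datasets store text-link pairs, not images.
        return hashlib.sha256(url.encode("utf-8")).hexdigest()

    def filter_rows(rows, blocklist):
        # rows: an iterable of (url, caption) pairs; keep only rows whose URL
        # digest does not appear on the expert-supplied blocklist.
        for url, caption in rows:
            if url_digest(url) not in blocklist:
                yield (url, caption)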

According to LAION, the bigger concern is that some links to known CSAM scraped into a 2022 dataset are still active more than a year later.

“It is a clear hint that law enforcement bodies have to intensify the efforts to take down domains that host such image content on public web following information and recommendations by organizations like IWF and C3P, making it a safer place, also for various kinds of research related activities,” LAION’s blog said.

HRW researcher Hye Jung Han praised LAION for removing sensitive data that she flagged, while also urging more interventions.

“LAION’s responsive removal of some children’s personal photos from their dataset is very welcome, and will help to protect these children from their likenesses being misused by AI systems,” Han told Ars. “It’s now up to governments to pass child data protection laws that would protect all children’s privacy online.”

Although LAION’s blog said that the content removals represented an “upper bound” of CSAM that existed in the initial dataset, AI specialist and Creative.AI co-founder Alex Champandard told Ars that he’s skeptical that all CSAM was removed.

“They only filter out previously identified CSAM, which is only a partial solution,” Champandard told Ars. “Statistically speaking, most instances of CSAM have likely never been reported nor investigated by C3P or IWF. A more reasonable estimate of the problem is about 25,000 instances of things you’d never want to train generative models on—maybe even 50,000.”

Champandard agreed with Han that more regulations are needed to protect people from AI harms when training data is scraped from the web.

“There’s room for improvement on all fronts: privacy, copyright, illegal content, etc.,” Champandard said. Because “there are too many data rights being broken with such web-scraped datasets,” Champandard suggested that datasets like LAION’s won’t “stand the test of time.”

“LAION is simply operating in the regulatory gap and lag in the judiciary system until policymakers realize the magnitude of the problem,” Champandard said.

Texas judge decides Texas is a perfectly good venue for X to sue Media Matters

Tesla CEO Elon Musk speaks at Tesla’s “Cyber Rodeo” on April 7, 2022, in Austin, Texas.

A federal judge in Texas yesterday ruled that Elon Musk’s X Corp. can continue its lawsuit against Media Matters for America. US District Judge Reed O’Connor of the Northern District of Texas, who recently refused to recuse himself from the case despite having purchased Tesla stock, denied Media Matters’ motion to dismiss.

X Corp. sued Media Matters after the nonprofit watchdog group published research on ads being placed next to pro-Nazi content on X, formerly Twitter. X’s lawsuit also names reporter Eric Hananoki and Media Matters President Angelo Carusone as defendants.

Because of O’Connor’s ruling, X can move ahead with its claims of tortious interference with contract, business disparagement, and tortious interference with prospective economic advantage. A jury trial is scheduled to begin on April 7, 2025.

“Plaintiff alleges that Defendants knowingly and maliciously fabricated side-by-side images of various advertisers’ posts on Plaintiff’s social media platform X depicted next to neo-Nazi or other extremist content, and portrayed these designed images as if they were what the average user experiences on the X platform,” O’Connor wrote in his ruling on the motion to dismiss. “Plaintiff asserts that Defendants proceeded with this course of action in an effort to publicly portray X as a social media platform dominated by neo-Nazism and anti-Semitism, and thereby alienate major advertisers, publishers, and users away from the X platform, intending to harm it.”

A different federal judge in the District of Columbia recently criticized X’s claims, pointing out that “X did not deny that advertising in fact had appeared next to the extremist posts on the day in question.” But X has a more friendly judge in O’Connor, who has made several rulings against Media Matters. The defendant could also face a tough road on appeal because challenges would go to the conservative-leaning US Court of Appeals for the 5th Circuit.

Judge: Media Matters “targeted” Texas-based advertisers

Media Matters’ motion to dismiss argues in part that Texas is an improper forum for the dispute because “X is organized under Nevada law and maintains its principal place of business in San Francisco, California, where its own terms of service require users of its platform to litigate any disputes.” (Musk recently said that X will move its headquarters from San Francisco to Austin, Texas.)

O’Connor’s ruling acknowledges that “when a nonresident defendant files a motion to dismiss for lack of personal jurisdiction, the burden of proof is on the plaintiff as the party seeking to invoke the district court’s jurisdiction.” In this case, O’Connor said that jurisdiction is established if the defendants “targeted the conduct that is the basis for this lawsuit at Texas.”

O’Connor ruled that the court has jurisdiction because Media Matters articles “targeted” Texas-based companies that advertised on X, specifically Oracle and AT&T, even though those companies are not parties to the lawsuit. O’Connor said the Media Matters “articles targeted, among others, Oracle, a Texas-based company that placed ads on Plaintiff’s platform… Plaintiff also alleges that this ‘crusade’ targeted its blue-chip advertisers which included Oracle and AT&T, Texas-based companies.”

O’Connor, a George W. Bush appointee, wrote that a “defendant who targets a Texas company with tortious activity has fair warning that it may be sued there.”

“This targeting of the alleged tortious acts at the headquarters of Texas-based companies is sufficient to establish specific jurisdiction in Texas… each Defendant engaged in the alleged tortious acts which targeted harm in, among other places, Texas,” he wrote.

Judge cites TV appearances

That includes Hananoki, the Media Matters reporter who wrote the articles, and Carusone. Each of those individual defendants “targeted” the conduct at Texas, O’Connor found.

“Plaintiff alleges Carusone participated in the ‘crusade’ with Hananoki and Media Matters when he appeared on television shows a number of times discussing the importance of advertisers to Plaintiff’s business model and advocating that advertisers should cease doing business with Plaintiff if there is a deluge of ‘unmoderated right-wing hatred and misinformation,'” O’Connor wrote.

Ruling that “Media Matters targeted Texas,” O’Connor wrote that the group pursued “a strategy to target Plaintiff’s blue-chip advertisers, including Oracle and AT&T, Texas-based companies; in furtherance of this strategy it published the Hananoki articles, and it published other articles pressuring the blue-chip advertisers, all to pressure blue-chip advertisers to cease doing business with Plaintiff. Finally, the inference from Media Matters’ affidavit is that Media Matters also emailed the Hananoki articles to Texans, and Plaintiff’s lawsuit arises out of this conduct.”

Media Matters also sought dismissal on the basis that X failed to state a claim. But O’Connor said that “the Court must accept all well-pleaded facts in the complaint as true and view them in the light most favorable to the plaintiff,” and he found that X “has provided sufficient allegations to survive dismissal.”

Media Matters declined to comment when contacted by Ars today.

US: Alaska man busted with 10,000+ child sex abuse images despite his many encrypted apps

click here —

Encryption alone won’t save you from the feds.

The rise in child sexual abuse material (CSAM) has been one of the darkest Internet trends, but after years of covering CSAM cases, I’ve found that few of those arrested show deep technical sophistication. (Perhaps this is simply because the technically sophisticated are better at avoiding arrest.)

Most understand that what they are doing is illegal and that password protection is required, both for their devices and online communities. Some can also use tools like TOR (The Onion Router). And, increasingly, encrypted (or at least encrypted-capable) chat apps might be in play.

But I’ve never seen anyone who, when arrested, had three Samsung Galaxy phones filled with “tens of thousands of videos and images” depicting CSAM, all of it hidden behind a secrecy-focused, password-protected app called “Calculator Photo Vault.” Nor have I seen anyone arrested for CSAM having used all of the following:

  • Potato Chat (“Use the most advanced encryption technology to ensure information security.”)
  • Enigma (“The server only stores the encrypted message, and only the users client can decrypt it.”)
  • nandbox [presumably the Messenger app] (“Free Secured Calls & Messages”)
  • Telegram (“To this day, we have disclosed 0 bytes of user data to third parties, including governments.”)
  • TOR (“Browse Privately. Explore Freely.”)
  • Mega NZ (“We use zero-knowledge encryption.”)
  • Web-based generative AI tools/chatbots

That’s what made this week’s indictment in Alaska of a heavy vehicle driver for the US military so unusual.

According to the government, Seth Herrera not only used all of these tools to store and download CSAM, but he also created his own—and in two disturbing varieties. First, he allegedly recorded nude minor children himself and later “zoomed in on and enhanced those images using AI-powered technology.”

Second, he took this imagery he had created and then “turned to AI chatbots to ensure these minor victims would be depicted as if they had engaged in the type of sexual contact he wanted to see.” In other words, he created fake AI CSAM—but using imagery of real kids.

The material was allegedly stored behind password protection on his phone(s) but also on Mega and on Telegram, where Herrera is said to have “created his own public Telegram group to store his CSAM.” He also joined “multiple CSAM-related Enigma groups” and frequented dark websites with taglines like “The Only Child Porn Site you need!”

Despite all the precautions, Herrera’s home was searched and his phones were seized by Homeland Security Investigations; he was eventually arrested on August 23. In a court filing that day, a government attorney noted that Herrera “was arrested this morning with another smartphone—the same make and model as one of his previously seized devices.”

Caught anyway

The government is cagey about how, exactly, this criminal activity was unearthed, noting only that Herrera “tried to access a link containing apparent CSAM.” Presumably, this “apparent” CSAM was a government honeypot file or web-based redirect that logged the IP address and any other relevant information of anyone who clicked on it.

In the end, given that fatal click, none of the “I’ll hide it behind an encrypted app that looks like a calculator!” technical sophistication accomplished much. Forensic reviews of Herrera’s three phones now form the primary basis for the charges against him, and Herrera himself allegedly “admitted to seeing CSAM online for the past year and a half” in an interview with the feds.

Since Herrera himself has a young daughter, and since there are “six children living within his fourplex alone” on Joint Base Elmendorf-Richardson, the government has asked a judge not to release Herrera on bail before his trial.
