Policy

DOJ subpoenas Nvidia in deepening AI antitrust probe, report says

The Department of Justice is reportedly deepening its probe into Nvidia. Officials have moved on from merely questioning competitors to subpoenaing Nvidia and other tech companies for evidence that could substantiate allegations that Nvidia is abusing its “dominant position in AI computing,” Bloomberg reported.

When news of the DOJ’s probe into the trillion-dollar company was first reported in June, Fast Company reported that scrutiny was intensifying merely because Nvidia was estimated to control “as much as 90 percent of the market for chips” capable of powering AI models. Experts told Fast Company that the DOJ probe might even be good for Nvidia’s business, noting that the market barely moved when the probe was first announced.

But the market’s confidence seemed to be shaken a little more on Tuesday, when Nvidia lost a “record-setting $279 billion” in market value following Bloomberg’s report. Nvidia’s losses became “the biggest single-day market-cap decline on record,” TheStreet reported.

People close to the DOJ’s investigation told Bloomberg that the DOJ’s “legally binding requests” require competitors “to provide information” on Nvidia’s suspected anticompetitive behaviors as a “dominant provider of AI processors.”

One concern is that Nvidia may be giving “preferential supply and pricing to customers who use its technology exclusively or buy its complete systems,” sources told Bloomberg. The DOJ is also reportedly probing Nvidia’s acquisition of RunAI—suspecting the deal may lock RunAI customers into using Nvidia chips.

Bloomberg’s report builds on a report last month from The Information that said that Advanced Micro Devices Inc. (AMD) and other Nvidia rivals were questioned by the DOJ—as well as third parties who could shed light on whether Nvidia potentially abused its market dominance in AI chips to pressure customers into buying more products.

According to Bloomberg’s sources, the DOJ is worried that “Nvidia is making it harder to switch to other suppliers and penalizes buyers that don’t exclusively use its artificial intelligence chips.”

In a statement to Bloomberg, Nvidia insisted that “Nvidia wins on merit, as reflected in our benchmark results and value to customers, who can choose whatever solution is best for them.” Additionally, Bloomberg noted that following a chip shortage in 2022, Nvidia CEO Jensen Huang has said that his company strives to prevent stockpiling of Nvidia’s coveted AI chips by prioritizing customers “who can make use of his products in ready-to-go data centers.”

Potential threats to Nvidia’s dominance

Despite the slump in shares, Nvidia’s market dominance seems unlikely to wane any time soon after its stock more than doubled this year. In an SEC filing this year, Nvidia bragged that its “accelerated computing ecosystem is bringing AI to every enterprise” with an “ecosystem” spanning “nearly 5 million developers and 40,000 companies.” Nvidia specifically highlighted that “more than 1,600 generative AI companies are building on Nvidia,” and according to Bloomberg, Nvidia will close out 2024 with more profits than the total sales of its closest competitor, AMD.

After the DOJ’s most recent big win, in which it proved that Google holds a monopoly on search, the agency appears intent on getting ahead of any tech company’s ambitions to seize monopoly power and essentially become the Google of the AI industry. In June, DOJ antitrust chief Jonathan Kanter confirmed to the Financial Times that the DOJ is examining “monopoly choke points and the competitive landscape” in AI beyond just scrutinizing Nvidia.

According to Kanter, the DOJ is scrutinizing all aspects of the AI industry—”everything from computing power and the data used to train large language models, to cloud service providers, engineering talent and access to essential hardware such as graphics processing unit chips.” But in particular, the DOJ appears concerned that GPUs like Nvidia’s advanced AI chips remain a “scarce resource.” Kanter told the Financial Times that an “intervention” in “real time” to block a potential monopoly could be “the most meaningful intervention” and the least “invasive” as the AI industry grows.


TV viewers get screwed again as Disney channels are blacked out on DirecTV

Disney/DirecTV blackout —

Two days into blackout, DirecTV will fight Disney “as long as it needs to.”

A TV camera that says “Disney.”

Disney-owned channels have been blacked out on DirecTV for the past two days because of a contract dispute, with both companies claiming publicly that they aren’t willing to budge much from their negotiating positions. Until it’s resolved, DirecTV subscribers won’t have access to ABC, ESPN, and other Disney channels.

While there have been many contentious contract negotiations between TV providers and programmers, this one is “not a run-of-the-mill dispute,” DirecTV CFO Ray Carpenter said today in a call with reporters and analysts, according to The Hollywood Reporter. “This is not the kind of dispute where we’re haggling over percentage points on a rate. This is really about changing the model in a way that gives everyone confidence that this industry can survive.”

Carpenter was quoted as saying that DirecTV will fight Disney “as long as it needs to” and accused Disney of timing the blackout before big sporting events “to put the most pain and disruption on our customers.” Carpenter also said DirecTV doesn’t “have any dates drawn in the sand” and is “not playing a short-term game,” according to Variety.

On Sunday, Disney issued a statement attributed to three executives at Disney Entertainment and ESPN. “DirecTV chose to deny millions of subscribers access to our content just as we head into the final week of the US Open and gear up for college football and the opening of the NFL season,” the Disney statement said. “While we’re open to offering DirecTV flexibility and terms which we’ve extended to other distributors, we will not enter into an agreement that undervalues our portfolio of television channels and programs.”

DirecTV users must apply for $20 credits

DirecTV is offering $20 credits to affected customers, but the TV firm is not applying those credits automatically. Customers need to visit DirecTV’s website to request a bill credit.

AT&T owns 70 percent of DirecTV after spinning it off into a new entity in 2021. AT&T explored options for selling its 70 percent stake almost a year ago. Private equity firm TPG owns the other 30 percent.

Based on previous TV carriage fights, a DirecTV/Disney agreement could be reached within days. A similar dispute between Disney and Charter Communications happened almost exactly a year ago and was resolved after eight days.

Carpenter said today that DirecTV wants to sell smaller channel packages and that Disney’s proposed terms conflict with that goal. Variety summarized his comments:

At the heart of the dispute, says Carpenter, is a desire by DirecTV to sell “skinnied down” packages of programming tailored to various subscriber interests, rather than forcing customers to take channels they may not want or watch very often. The company believes such a model would help retain subscribers, even if they were paying less. There is also interest in helping customers find other content, even if it’s not sold directly on the service, Carpenter says.

Streaming add-ons and “skinny” bundles

Last year’s agreement between Disney and Charter included access to the Disney+ and ESPN+ streaming services for Charter’s Spectrum cable customers. Carpenter was quoted by the Hollywood Reporter as saying there is “value” in that kind of deal, “but what’s important is that it’s not a replica of the model that got us here in the first place, where it has to be distributed and paid for by 100 percent or a large percentage of the customers.”

A lobby group that represents DirecTV and other TV providers, the American Television Alliance, blasted Disney for “seek[ing] to raise rates and force distributors to carry an unwieldy ‘one-size fits all’ bundle of more than a dozen channels to the vast majority of their subscribers.” The group said Disney’s proposed terms would require TV companies to sell “fat bundles” that “force consumers to pay for programming they don’t watch.”

Disney’s statement on Sunday claimed that DirecTV rejected its offer of “a fair, marketplace-based agreement.”

“DirecTV continues to push a narrative that they want to explore more flexible, ‘skinnier’ bundles and that Disney refuses to engage,” Disney said. “This is blatantly false. Disney has been negotiating with them in good faith for weeks and has proposed a variety of flexible options, in addition to innovative ways to work together in making Disney’s direct-to-consumer streaming services available to DirecTV’s customers.”

We contacted both companies today and will update this article if there are any major developments.

Disclosure: The Advance/Newhouse Partnership, which owns 12.4 percent of Charter, is part of Advance Publications, which also owns Ars Technica parent Condé Nast.


Cops’ favorite face image search engine fined $33M for privacy violation

A controversial facial recognition tech company behind a vast face image search engine widely used by cops has been fined approximately $33 million in the Netherlands for serious data privacy violations.

According to the Dutch Data Protection Authority (DPA), Clearview AI “built an illegal database with billions of photos of faces” by crawling the web and without gaining consent, including from people in the Netherlands.

Clearview AI’s technology—which has been banned in some US cities over concerns that it gives law enforcement unlimited power to track people in their daily lives—works by pulling in more than 40 billion face images from the web without setting “any limitations in terms of geographical location or nationality,” the Dutch DPA found. Perhaps most concerning, the Dutch DPA said, Clearview AI also provides “facial recognition software for identifying children,” therefore indiscriminately processing personal data of minors.

Trained on the face image data, the technology makes it possible to upload a photo of anyone and search for matches on the Internet. People appearing in search results, the Dutch DPA found, can be “unambiguously” identified. Billed as a public safety resource accessible only by law enforcement, Clearview AI’s face database casts too wide a net, the Dutch DPA said, with the majority of people pulled into the tool likely never becoming subject to a police search.
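The Dutch DPA’s description tracks how embedding-based face search generally works: a model converts each scraped face into a numerical vector, and a query photo is matched by nearest-neighbor search over those vectors. The sketch below is a generic illustration of that pattern, not Clearview’s actual system; the 512-dimension embeddings, random stand-in data, and example URLs are all assumptions for demonstration.

```python
"""Generic sketch of embedding-based face search.

This illustrates the general technique described in the article, not
Clearview's actual system. It assumes some face-embedding model has
already mapped each scraped image to a fixed-length vector; matching is
then nearest-neighbor search by cosine similarity.
"""
import numpy as np


def cosine_scores(query: np.ndarray, database: np.ndarray) -> np.ndarray:
    # query: (d,) vector; database: (N, d) matrix of stored embeddings.
    q = query / np.linalg.norm(query)
    db = database / np.linalg.norm(database, axis=1, keepdims=True)
    return db @ q


def search(query: np.ndarray, database: np.ndarray, urls: list[str], top_k: int = 5):
    """Return the top_k (url, similarity) pairs for a query face embedding."""
    scores = cosine_scores(query, database)
    best = np.argsort(scores)[::-1][:top_k]
    return [(urls[i], float(scores[i])) for i in best]


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    database = rng.normal(size=(1000, 512))              # stand-in for scraped-face embeddings
    urls = [f"https://example.com/img/{i}" for i in range(1000)]
    query = database[42] + 0.01 * rng.normal(size=512)   # a near-duplicate of entry 42
    print(search(query, database, urls))
```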

“The processing of personal data is not only complex and extensive, it moreover offers Clearview’s clients the opportunity to go through data about individual persons and obtain a detailed picture of the lives of these individual persons,” the Dutch DPA said. “These processing operations therefore are highly invasive for data subjects.”

Clearview AI had no legitimate interest under the European Union’s General Data Protection Regulation (GDPR) for the company’s invasive data collection, Dutch DPA Chairman Aleid Wolfsen said in a press release. The Dutch official likened Clearview AI’s sprawling overreach to “a doom scenario from a scary film,” while emphasizing in his decision that Clearview AI has stopped responding to any requests to access or remove data not only from citizens in the Netherlands but from people across the EU.

“Facial recognition is a highly intrusive technology that you cannot simply unleash on anyone in the world,” Wolfsen said. “If there is a photo of you on the Internet—and doesn’t that apply to all of us?—then you can end up in the database of Clearview and be tracked.”

To protect Dutch citizens’ privacy, the Dutch DPA imposed a roughly $33 million fine that could go up by about $5.5 million if Clearview AI does not follow orders on compliance. Any Dutch businesses attempting to use Clearview AI services could also face “hefty fines,” the Dutch DPA warned, as that “is also prohibited” under the GDPR.

Clearview AI was given three months to appoint a representative in the EU, to stop processing personal data—including sensitive biometric data—in the Netherlands, and to update its privacy policies to inform users in the Netherlands of their rights under the GDPR. But the company only has one month to resume processing requests for data access or removals from people in the Netherlands who otherwise find it “impossible” to exercise their rights to privacy, the Dutch DPA’s decision said.

It appears that Clearview AI has no intentions to comply, however. Jack Mulcaire, the chief legal officer for Clearview AI, confirmed to Ars that the company maintains that it is not subject to the GDPR.

“Clearview AI does not have a place of business in the Netherlands or the EU, it does not have any customers in the Netherlands or the EU, and does not undertake any activities that would otherwise mean it is subject to the GDPR,” Mulcaire said. “This decision is unlawful, devoid of due process and is unenforceable.”

But the Dutch DPA found that GDPR applies to Clearview AI because it gathers personal information about Dutch citizens without their consent and without ever alerting users to the data collection at any point.

“People who are in the database also have the right to access their data,” the Dutch DPA said. “This means that Clearview has to show people which data the company has about them, if they ask for this. But Clearview does not cooperate in requests for access.”

Dutch DPA vows to investigate Clearview AI execs

In the press release, Wolfsen said that the Dutch DPA has “to draw a very clear line” underscoring the “incorrect use of this sort of technology” after Clearview AI refused to change its data collection practices following fines in other parts of the European Union, including Italy and Greece.

While Wolfsen acknowledged that Clearview AI could be used to enhance police investigations, he said that the technology would be more appropriate if it was being managed by law enforcement “in highly exceptional cases only” and not indiscriminately by a private company.

“The company should never have built the database and is insufficiently transparent,” the Dutch DPA said.

Although Clearview AI appears ready to defend against the fine, the Dutch DPA said that the company failed to object to the decision within the provided six-week timeframe and therefore cannot appeal the decision.

Further, the Dutch DPA confirmed that authorities are “looking for ways to make sure that Clearview stops the violations” beyond the fines, including by “investigating if the directors of the company can be held personally responsible for the violations.”

Wolfsen claimed that such “liability already exists if directors know that the GDPR is being violated, have the authority to stop that, but omit to do so, and in this way consciously accept those violations.”


City of Columbus sues man after he discloses severity of ransomware attack

WHISTLEBLOWER IN LEGAL CROSSHAIRS —

Mayor said data was unusable to criminals; researcher proved otherwise.

A ransom note is plastered across a laptop monitor.

A judge in Ohio has issued a temporary restraining order against a security researcher who presented evidence that a recent ransomware attack on the city of Columbus scooped up reams of sensitive personal information, contradicting claims made by city officials.

The order, issued by a judge in Ohio’s Franklin County, came after the city of Columbus fell victim to a ransomware attack on July 18 that siphoned 6.5 terabytes of the city’s data. A ransomware group known as Rhysida took credit for the attack and offered to auction off the data with a starting bid of about $1.7 million in bitcoin. On August 8, after the auction failed to find a bidder, Rhysida released what it said was about 45 percent of the stolen data on the group’s dark web site, which is accessible to anyone with a TOR browser.

Dark web not readily available to public—really?

Columbus Mayor Andrew Ginther said on August 13 that a “breakthrough” in the city’s forensic investigation of the breach found that the sensitive files Rhysida obtained were either encrypted or corrupted, making them “unusable” to the thieves. Ginther went on to say the data’s lack of integrity was likely the reason the ransomware group had been unable to auction off the data.

Shortly after Ginther made his remarks, security researcher David Leroy Ross contacted local news outlets and presented evidence that showed the data Rhysida published was fully intact and contained highly sensitive information regarding city employees and residents. Ross, who uses the alias Connor Goodwolf, presented screenshots and other data that showed the files Rhysida had posted included names from domestic violence cases and Social Security numbers for police officers and crime victims. Some of the data spanned years.

On Thursday, the city of Columbus sued Ross for alleged damages for criminal acts, invasion of privacy, negligence, and civil conversion. The lawsuit claimed that downloading documents from a dark web site run by ransomware attackers amounted to him “interacting” with them and required special expertise and tools. The suit went on to challenge Ross’s decision to alert reporters to the information, which it claimed would not be easily obtained by others.

“Only individuals willing to navigate and interact with the criminal element on the dark web, who also have the computer expertise and tools necessary to download data from the dark web, would be able to do so,” city attorneys wrote. “The dark web-posted data is not readily available for public consumption. Defendant is making it so.”

The same day, a Franklin County judge granted the city’s motion for a temporary restraining order against Ross. It bars the researcher “from accessing, and/or downloading, and/or disseminating” any city files that were posted to the dark web. The motion was made and granted “ex parte,” meaning in secret before Ross was informed of it or had an opportunity to present his case.

In a press conference Thursday, Columbus City Attorney Zach Klein defended his decision to sue Ross and obtain the restraining order.

“This is not about freedom of speech or whistleblowing,” he said. “This is about the downloading and disclosure of stolen criminal investigatory records. This effect is to get [Ross] to stop downloading and disclosing stolen criminal records to protect public safety.”

The Columbus city attorney’s office didn’t respond to questions sent by email. It did provide the following statement:

The lawsuit filed by the City of Columbus pertains to stolen data that Mr. Ross downloaded from the dark web to his own, local device and disseminated to the media. In fact, several outlets used the stolen data provided by Ross to go door-to-door and contact individuals using names and addresses contained within the stolen data. As has now been extensively reported, Mr. Ross also showed multiple news outlets stolen, confidential data belonging to the City which he claims reveal the identities of undercover police officers and crime victims as well as evidence from active criminal investigations. Sharing this stolen data threatens public safety and the integrity of the investigations. The temporary restraining order granted by the Court prohibits Mr. Ross from disseminating any of the City’s stolen data. Mr. Ross is still free to speak about the cyber incident and even describe what kind of data is on the dark web—he just cannot disseminate that data.

Attempts to reach Ross for comment were unsuccessful. Email sent to the Columbus mayor’s office went unanswered.

A screenshot showing the Rhysida dark web site.

As shown above in the screenshot of the Rhysida dark web site on Friday morning, the sensitive data remains available to anyone who looks for it. Friday’s order may bar Ross from accessing the data or disseminating it to reporters, but it has no effect on those who plan to use the data for malicious purposes.


Harmful “nudify” websites used Google, Apple, and Discord sign-on systems

Major technology companies, including Google, Apple, and Discord, have been enabling people to quickly sign up to harmful “undress” websites, which use AI to remove clothes from real photos to make victims appear to be “nude” without their consent. More than a dozen of these deepfake websites have been using login buttons from the tech companies for months.

A WIRED analysis found 16 of the biggest so-called undress and “nudify” websites using the sign-in infrastructure from Google, Apple, Discord, Twitter, Patreon, and Line. This approach allows people to easily create accounts on the deepfake websites—offering them a veneer of credibility—before they pay for credits and generate images.

While bots and websites that create nonconsensual intimate images of women and girls have existed for years, the number has increased with the introduction of generative AI. This kind of “undress” abuse is alarmingly widespread, with teenage boys allegedly creating images of their classmates. Tech companies have been slow to deal with the scale of the issues, critics say, with the websites appearing high in search results, paid advertisements promoting them on social media, and apps showing up in app stores.

“This is a continuation of a trend that normalizes sexual violence against women and girls by Big Tech,” says Adam Dodge, a lawyer and founder of EndTAB (Ending Technology-Enabled Abuse). “Sign-in APIs are tools of convenience. We should never be making sexual violence an act of convenience,” he says. “We should be putting up walls around the access to these apps, and instead we’re giving people a drawbridge.”

The sign-in tools analyzed by WIRED, which are deployed through APIs and common authentication methods, allow people to use existing accounts to join the deepfake websites. Google’s login system appeared on 16 websites, Discord’s appeared on 13, and Apple’s on six. X’s button was on three websites, with Patreon and messaging service Line’s both appearing on the same two websites.

WIRED is not naming the websites, since they enable abuse. Several are part of wider networks and owned by the same individuals or companies. The login systems have been used despite the tech companies broadly having rules that state developers cannot use their services in ways that enable harm or harassment or invade people’s privacy.

After being contacted by WIRED, spokespeople for Discord and Apple said they have removed the developer accounts connected to their websites. Google said it will take action against developers when it finds its terms have been violated. Patreon said it prohibits accounts that allow explicit imagery to be created, and Line confirmed it is investigating but said it could not comment on specific websites. X did not reply to a request for comment about the way its systems are being used.

In the hours after Jud Hoffman, Discord vice president of trust and safety, told WIRED it had terminated the websites’ access to its APIs for violating its developer policy, one of the undress websites posted in a Telegram channel that authorization via Discord was “temporarily unavailable” and claimed it was trying to restore access. That undress service did not respond to WIRED’s request for comment about its operations.


Nonprofit scrubs illegal content from controversial AI training dataset

After Stanford Internet Observatory researcher David Thiel found links to child sexual abuse materials (CSAM) in an AI training dataset tainting image generators, the controversial dataset was immediately taken down in 2023.

Now, the LAION (Large-scale Artificial Intelligence Open Network) team has released a scrubbed version of the LAION-5B dataset called Re-LAION-5B and claimed that it “is the first web-scale, text-link to images pair dataset to be thoroughly cleaned of known links to suspected CSAM.”

To scrub the dataset, LAION partnered with the Internet Watch Foundation (IWF) and the Canadian Center for Child Protection (C3P) to remove 2,236 links that matched with hashed images in the online safety organizations’ databases. Removals include all the links flagged by Thiel, as well as content flagged by LAION’s partners and other watchdogs, like Human Rights Watch, which warned of privacy issues after finding photos of real kids included in the dataset without their consent.

In his study, Thiel warned that “the inclusion of child abuse material in AI model training data teaches tools to associate children in illicit sexual activity and uses known child abuse images to generate new, potentially realistic child abuse content.”

Thiel urged LAION and other researchers scraping the Internet for AI training data to adopt a new safety standard to better filter out not just CSAM, but any explicit imagery that could be combined with photos of children to generate CSAM. (Recently, the US Department of Justice pointedly said that “CSAM generated by AI is still CSAM.”)

While LAION’s new dataset won’t alter models that were trained on the prior dataset, LAION claimed that Re-LAION-5B sets “a new safety standard for cleaning web-scale image-link datasets.” Where before illegal content “slipped through” LAION’s filters, the researchers have now developed an improved new system “for identifying and removing illegal content,” LAION’s blog said.

Thiel told Ars that he would agree that LAION has set a new safety standard with its latest release, but “there are absolutely ways to improve it.” However, “those methods would require possession of all original images or a brand new crawl,” and LAION’s post made clear that it only utilized image hashes and did not conduct a new crawl that could have risked pulling in more illegal or sensitive content. (On Threads, Thiel shared more in-depth impressions of LAION’s effort to clean the dataset.)

LAION warned that “current state-of-the-art filters alone are not reliable enough to guarantee protection from CSAM in web scale data composition scenarios.”

“To ensure better filtering, lists of hashes of suspected links or images created by expert organizations (in our case, IWF and C3P) are suitable choices,” LAION’s blog said. “We recommend research labs and any other organizations composing datasets from the public web to partner with organizations like IWF and C3P to obtain such hash lists and use those for filtering. In the longer term, a larger common initiative can be created that makes such hash lists available for the research community working on dataset composition from web.”
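LAION’s recommendation boils down to a set-membership check: attach a hash to every dataset entry, then drop any entry whose hash appears on a blocklist supplied by an expert organization. The sketch below illustrates that idea under stated assumptions: the CSV layout, column names, and file names are hypothetical, and real IWF/C3P lists use perceptual-hash formats shared under strict agreements rather than plain SHA-256 values.

```python
"""Minimal sketch of hash-list filtering for a web-scale link dataset.

Assumptions (not from LAION's post): dataset rows are CSV records with an
"image_sha256" column, and the blocklist is a plain-text file with one
lowercase hex hash per line. Real hash lists from IWF/C3P use perceptual
hashes and are distributed only under strict agreements.
"""
import csv


def load_blocklist(path: str) -> set[str]:
    # One hash per line; normalize case and skip blank lines.
    with open(path, encoding="utf-8") as fh:
        return {line.strip().lower() for line in fh if line.strip()}


def filter_dataset(in_path: str, out_path: str, blocklist: set[str]) -> int:
    """Copy rows whose image hash is not on the blocklist; return the count removed."""
    removed = 0
    with open(in_path, newline="", encoding="utf-8") as src, \
         open(out_path, "w", newline="", encoding="utf-8") as dst:
        reader = csv.DictReader(src)
        writer = csv.DictWriter(dst, fieldnames=reader.fieldnames)
        writer.writeheader()
        for row in reader:
            if row["image_sha256"].lower() in blocklist:
                removed += 1
                continue
            writer.writerow(row)
    return removed


if __name__ == "__main__":
    banned = load_blocklist("banned_hashes.txt")           # hypothetical partner-supplied list
    n = filter_dataset("links.csv", "links_filtered.csv", banned)
    print(f"Removed {n} links matching the blocklist")
```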

According to LAION, the bigger concern is that some links to known CSAM scraped into a 2022 dataset are still active more than a year later.

“It is a clear hint that law enforcement bodies have to intensify the efforts to take down domains that host such image content on public web following information and recommendations by organizations like IWF and C3P, making it a safer place, also for various kinds of research related activities,” LAION’s blog said.

HRW researcher Hye Jung Han praised LAION for removing sensitive data that she flagged, while also urging more interventions.

“LAION’s responsive removal of some children’s personal photos from their dataset is very welcome, and will help to protect these children from their likenesses being misused by AI systems,” Han told Ars. “It’s now up to governments to pass child data protection laws that would protect all children’s privacy online.”

Although LAION’s blog said that the content removals represented an “upper bound” of CSAM that existed in the initial dataset, AI specialist and Creative.AI co-founder Alex Champandard told Ars that he’s skeptical that all CSAM was removed.

“They only filter out previously identified CSAM, which is only a partial solution,” Champandard told Ars. “Statistically speaking, most instances of CSAM have likely never been reported nor investigated by C3P or IWF. A more reasonable estimate of the problem is about 25,000 instances of things you’d never want to train generative models on—maybe even 50,000.”

Champandard agreed with Han that more regulations are needed to protect people from AI harms when training data is scraped from the web.

“There’s room for improvement on all fronts: privacy, copyright, illegal content, etc.,” Champandard said. Because “there are too many data rights being broken with such web-scraped datasets,” Champandard suggested that datasets like LAION’s won’t “stand the test of time.”

“LAION is simply operating in the regulatory gap and lag in the judiciary system until policymakers realize the magnitude of the problem,” Champandard said.


Texas judge decides Texas is a perfectly good venue for X to sue Media Matters

Tesla CEO Elon Musk speaks at Tesla’s “Cyber Rodeo” on April 7, 2022, in Austin, Texas. Credit: Getty Images | AFP/Suzanne Cordeiro

A federal judge in Texas yesterday ruled that Elon Musk’s X Corp. can continue its lawsuit against Media Matters for America. US District Judge Reed O’Connor of the Northern District of Texas, who recently refused to recuse himself from the case despite having purchased Tesla stock, denied Media Matters’ motion to dismiss.

X Corp. sued Media Matters after the nonprofit watchdog group published research on ads being placed next to pro-Nazi content on X, formerly Twitter. X’s lawsuit also names reporter Eric Hananoki and Media Matters President Angelo Carusone as defendants.

Because of O’Connor’s ruling, X can move ahead with its claims of tortious interference with contract, business disparagement, and tortious interference with prospective economic advantage. A jury trial is scheduled to begin on April 7, 2025.

“Plaintiff alleges that Defendants knowingly and maliciously fabricated side-by-side images of various advertisers’ posts on Plaintiff’s social media platform X depicted next to neo-Nazi or other extremist content, and portrayed these designed images as if they were what the average user experiences on the X platform,” O’Connor wrote in his ruling on the motion to dismiss. “Plaintiff asserts that Defendants proceeded with this course of action in an effort to publicly portray X as a social media platform dominated by neo-Nazism and anti-Semitism, and thereby alienate major advertisers, publishers, and users away from the X platform, intending to harm it.”

A different federal judge in the District of Columbia recently criticized X’s claims, pointing out that “X did not deny that advertising in fact had appeared next to the extremist posts on the day in question.” But X has a more friendly judge in O’Connor, who has made several rulings against Media Matters. The defendant could also face a tough road on appeal because challenges would go to the conservative-leaning US Court of Appeals for the 5th Circuit.

Judge: Media Matters “targeted” Texas-based advertisers

Media Matters’ motion to dismiss argues in part that Texas is an improper forum for the dispute because “X is organized under Nevada law and maintains its principal place of business in San Francisco, California, where its own terms of service require users of its platform to litigate any disputes.” (Musk recently said that X will move its headquarters from San Francisco to Austin, Texas.)

O’Connor’s ruling acknowledges that “when a nonresident defendant files a motion to dismiss for lack of personal jurisdiction, the burden of proof is on the plaintiff as the party seeking to invoke the district court’s jurisdiction.” In this case, O’Connor said that jurisdiction is established if the defendants “targeted the conduct that is the basis for this lawsuit at Texas.”

O’Connor ruled that the court has jurisdiction because Media Matters articles “targeted” Texas-based companies that advertised on X, specifically Oracle and AT&T, even though those companies are not parties to the lawsuit. O’Connor said the Media Matters “articles targeted, among others, Oracle, a Texas-based company that placed ads on Plaintiff’s platform… Plaintiff also alleges that this ‘crusade’ targeted its blue-chip advertisers which included Oracle and AT&T, Texas-based companies.”

O’Connor, a George W. Bush appointee, wrote that a “defendant who targets a Texas company with tortious activity has fair warning that it may be sued there.”

“This targeting of the alleged tortious acts at the headquarters of Texas-based companies is sufficient to establish specific jurisdiction in Texas… each Defendant engaged in the alleged tortious acts which targeted harm in, among other places, Texas,” he wrote.

Judge cites TV appearances

That includes Hananoki, the Media Matters reporter who wrote the articles, and Carusone. Each of those individual defendants “targeted” the conduct at Texas, O’Connor found.

“Plaintiff alleges Carusone participated in the ‘crusade’ with Hananoki and Media Matters when he appeared on television shows a number of times discussing the importance of advertisers to Plaintiff’s business model and advocating that advertisers should cease doing business with Plaintiff if there is a deluge of ‘unmoderated right-wing hatred and misinformation,'” O’Connor wrote.

Ruling that “Media Matters targeted Texas,” O’Connor wrote that the group pursued “a strategy to target Plaintiff’s blue-chip advertisers, including Oracle and AT&T, Texas-based companies; in furtherance of this strategy it published the Hananoki articles, and it published other articles pressuring the blue-chip advertisers, all to pressure blue-chip advertisers to cease doing business with Plaintiff. Finally, the inference from Media Matters’ affidavit is that Media Matters also emailed the Hananoki articles to Texans, and Plaintiff’s lawsuit arises out of this conduct.”

Media Matters also sought dismissal on the basis that X failed to state a claim. But O’Connor said that “the Court must accept all well-pleaded facts in the complaint as true and view them in the light most favorable to the plaintiff,” and he found that X “has provided sufficient allegations to survive dismissal.”

Media Matters declined to comment when contacted by Ars today.


US: Alaska man busted with 10,000+ child sex abuse images despite his many encrypted apps

click here —

Encryption alone won’t save you from the feds.

Stylized illustration of a padlock.

The rise in child sexual abuse material (CSAM) has been one of the darkest Internet trends, but after years of covering CSAM cases, I’ve found that few of those arrested show deep technical sophistication. (Perhaps this is simply because the technically sophisticated are better at avoiding arrest.)

Most understand that what they are doing is illegal and that password protection is required, both for their devices and online communities. Some can also use tools like TOR (The Onion Router). And, increasingly, encrypted (or at least encrypted-capable) chat apps might be in play.

But I’ve never seen anyone who, when arrested, had three Samsung Galaxy phones filled with “tens of thousands of videos and images” depicting CSAM, all of it hidden behind a secrecy-focused, password-protected app called “Calculator Photo Vault.” Nor have I seen anyone arrested for CSAM having used all of the following:

  • Potato Chat (“Use the most advanced encryption technology to ensure information security.”)
  • Enigma (“The server only stores the encrypted message, and only the users client can decrypt it.”)
  • nandbox [presumably the Messenger app] (“Free Secured Calls & Messages”)
  • Telegram (“To this day, we have disclosed 0 bytes of user data to third parties, including governments.”)
  • TOR (“Browse Privately. Explore Freely.”)
  • Mega NZ (“We use zero-knowledge encryption.”)
  • Web-based generative AI tools/chatbots

That’s what made this week’s indictment in Alaska of a heavy vehicle driver for the US military so unusual.

According to the government, Seth Herrera not only used all of these tools to store and download CSAM, but he also created his own—and in two disturbing varieties. First, he allegedly recorded nude minor children himself and later “zoomed in on and enhanced those images using AI-powered technology.”

Second, he took this imagery he had created and then “turned to AI chatbots to ensure these minor victims would be depicted as if they had engaged in the type of sexual contact he wanted to see.” In other words, he created fake AI CSAM—but using imagery of real kids.

The material was allegedly stored behind password protection on his phone(s) but also on Mega and on Telegram, where Herrera is said to have “created his own public Telegram group to store his CSAM.” He also joined “multiple CSAM-related Enigma groups” and frequented dark websites with taglines like “The Only Child Porn Site you need!”

Despite all the precautions, Herrera’s home was searched and his phones were seized by Homeland Security Investigations; he was eventually arrested on August 23. In a court filing that day, a government attorney noted that Herrera “was arrested this morning with another smartphone—the same make and model as one of his previously seized devices.”

Caught anyway

The government is cagey about how, exactly, this criminal activity was unearthed, noting only that Herrera “tried to access a link containing apparent CSAM.” Presumably, this “apparent” CSAM was a government honeypot file or web-based redirect that logged the IP address and any other relevant information of anyone who clicked on it.

In the end, given that fatal click, none of the “I’ll hide it behind an encrypted app that looks like a calculator!” technical sophistication accomplished much. Forensic reviews of Herrera’s three phones now form the primary basis for the charges against him, and Herrera himself allegedly “admitted to seeing CSAM online for the past year and a half” in an interview with the feds.

Since Herrera himself has a young daughter, and since there are “six children living within his fourplex alone” on Joint Base Elmendorf-Richardson, the government has asked a judge not to release Herrera on bail before his trial.


Feds to get early access to OpenAI, Anthropic AI to test for doomsday scenarios

“Advancing the science of AI safety” —

AI companies agreed that ensuring AI safety was key to innovation.


OpenAI and Anthropic have each signed unprecedented deals granting the US government early access to conduct safety testing on the companies’ flashiest new AI models before they’re released to the public.

According to a press release from the National Institute of Standards and Technology (NIST), the deal creates a “formal collaboration on AI safety research, testing, and evaluation with both Anthropic and OpenAI” through the US Artificial Intelligence Safety Institute.

Through the deal, the US AI Safety Institute will “receive access to major new models from each company prior to and following their public release.” This will ensure that public safety won’t depend exclusively on how the companies “evaluate capabilities and safety risks, as well as methods to mitigate those risks,” NIST said, but will also draw on collaborative research with the US government.

The US AI Safety Institute will also be collaborating with the UK AI Safety Institute when examining models to flag potential safety risks. Both groups will provide feedback to OpenAI and Anthropic “on potential safety improvements to their models.”

NIST said that the agreements also build on voluntary AI safety commitments that AI companies made to the Biden administration to evaluate models to detect risks.

Elizabeth Kelly, director of the US AI Safety Institute, called the agreements “an important milestone” to “help responsibly steward the future of AI.”

Anthropic co-founder: AI safety “crucial” to innovation

The announcement comes as California is poised to pass one of the country’s first AI safety bills, which will regulate how AI is developed and deployed in the state.

Among the most controversial aspects of the bill is a requirement that AI companies build in a “kill switch” to stop models from introducing “novel threats to public safety and security,” especially if the model is acting “with limited human oversight, intervention, or supervision.”

Critics say the bill overlooks existing safety risks from AI—like deepfakes and election misinformation—to prioritize prevention of doomsday scenarios and could stifle AI innovation while providing little security today. They’ve urged California’s governor, Gavin Newsom, to veto the bill if it arrives at his desk, but it’s still unclear if Newsom intends to sign.

Anthropic was one of the AI companies that cautiously supported California’s controversial AI bill, Reuters reported, claiming that the potential benefits of the regulations likely outweigh the costs after a late round of amendments.

The company’s CEO, Dario Amodei, told Newsom why Anthropic supports the bill now in a letter last week, Reuters reported. He wrote that although Anthropic isn’t certain about aspects of the bill that “seem concerning or ambiguous,” Anthropic’s “initial concerns about the bill potentially hindering innovation due to the rapidly evolving nature of the field have been greatly reduced” by recent changes to the bill.

OpenAI has notably joined critics opposing California’s AI safety bill and has been called out by whistleblowers for lobbying against it.

In a letter to the bill’s co-sponsor, California Senator Scott Wiener, OpenAI’s chief strategy officer, Jason Kwon, suggested that “the federal government should lead in regulating frontier AI models to account for implications to national security and competitiveness.”

The ChatGPT maker striking a deal with the US AI Safety Institute seems in line with that thinking. As Kwon told Reuters, “We believe the institute has a critical role to play in defining US leadership in responsibly developing artificial intelligence and hope that our work together offers a framework that the rest of the world can build on.”

While some critics worry California’s AI safety bill will hamper innovation, Anthropic’s co-founder, Jack Clark, told Reuters today that “safe, trustworthy AI is crucial for the technology’s positive impact.” He confirmed that Anthropic’s “collaboration with the US AI Safety Institute” will leverage the government’s “wide expertise to rigorously test” Anthropic’s models “before widespread deployment.”

In NIST’s press release, Kelly agreed that “safety is essential to fueling breakthrough technological innovation.”

By directly collaborating with OpenAI and Anthropic, the US AI Safety Institute also plans to conduct its own research to help “advance the science of AI safety,” Kelly said.


MPA says no more “Whac-a-Mole” with pirate sites, claims it took down “mothership”

“We took down the mothership” —

Fmovies takedown “is a stunning victory,” MPA CEO Charles Rivkin said.

Motion Picture Association CEO Charles Rivkin speaks onstage during CinemaCon, a convention of the National Association of Theatre Owners, at Caesars Palace on April 9, 2024, in Las Vegas, Nevada. Credit: Getty Images | Jerod Harris

A group representing major film studios said it collaborated with Vietnamese authorities to take down what it called “the largest pirate streaming operation in the world.”

Fmovies, which the film industry group also called the “world’s largest piracy ring,” is said to have drawn more than 6.7 billion visits between January 2023 and June 2024. Launched in 2016, the Hanoi-based outfit included pirate sites bflixz, flixtorz, movies7, myflixer, and aniwave.

“The takedown of Fmovies is a stunning victory for casts, crews, writers, directors, studios, and the creative community across the globe,” Motion Picture Association (MPA) CEO Charles Rivkin said today.

The industry announcement was made by the Alliance for Creativity and Entertainment (ACE), an enforcement group that was created by the MPA and has members including Amazon, Apple, Comcast, Disney, Fox, HBO, Hulu, MGM, NBCUniversal, Netflix, Paramount, Sony, and Warner Bros. In addition to leading the MPA, Rivkin is the chairman of ACE.

“With the leadership of ACE and the partnership of the Ministry of Public Security and the Hanoi Municipal Police, we are countering criminal activity, defending the safety of audiences, reducing risks posed to tens of millions of consumers, and protecting the rights and livelihoods of creators,” Rivkin said.

ACE said that Vidsrc.to, “a notorious video hosting provider operated by the same suspects,” was also taken down in an operation that affected “hundreds of additional dedicated piracy sites.”

“We took down the mothership”

Rivkin claimed that the industry action will have a major effect on availability of pirated content. “We took down the mothership here,” he told Variety. “There was a time when piracy was Whac-a-Mole… Today, we go after piracy at its root.”

Another MPA official, Chief Content Protection Officer Larissa Knapp, said the group anticipates “ongoing joint efforts with Vietnamese authorities, US Homeland Security Investigations, and the US Department of Justice International Computer Hacking and Intellectual Property (ICHIP) program to bring the criminal operators to justice.”

ACE also recently announced settlements with three US-based operators requiring them to shut down IPTV services accused of “mass copyright infringement.” ACE bills itself as the “world’s leading coalition dedicated to protecting the legal creative market and reducing digital piracy.” It works closely with the US government: The National Intellectual Property Rights Coordination Center, a US government office overseen by Immigration and Customs Enforcement, announced in 2022 that it was “embedding MPA and ACE personnel” with its team in Washington, DC.

In an April 2024 speech, Rivkin complained that American users were able to access Fmovies because of the lack of a site-blocking law. “One of the largest illegal streaming sites in the world, FMovies, sees over 160 million visits per month—and because other nations already passed site-blocking legislation, a third of that traffic still comes from the United States,” Rivkin said. In the speech, Rivkin said the MPA planned to lobby members of Congress for a law requiring Internet service providers to block piracy websites.

Film studios have also tried to force ISPs to disconnect Internet users accused of piracy. Cable firm Cox Communications recently asked the Supreme Court to overturn an appeals court ruling in a case brought by Sony, saying the ruling “would force ISPs to terminate Internet service to households or businesses based on unproven allegations of infringing activity.”


Telegram CEO charged with numerous crimes and is banned from leaving France

Indictment —

Multi-billionaire must post bail of 5 million euros, report to police twice a week.

Pavel Durov, CEO and co-founder of Telegram, speaks at TechCrunch Disrupt SF 2015 on September 21, 2015, in San Francisco. Credit: Getty Images | Steve Jennings

Telegram CEO Pavel Durov was indicted in France today and ordered to post bail of 5 million euros. The multi-billionaire was forbidden from leaving the country and must report to police twice a week while the case continues.

Charges were detailed in a statement issued today by Paris prosecutor Laure Beccuau, which was provided to Ars. They are nearly identical to the possible charges released by Beccuau on Monday.

The first charge listed is complicity in “web-mastering an online platform in order to enable an illegal transaction in organized group.” Today’s press release said this charge carries a maximum penalty of 10 years in prison and a 500,000-euro fine.

Telegram’s alleged refusal to cooperate with law enforcement on criminal investigations resulted in a charge of “refusal to communicate, at the request of competent authorities, information or documents necessary for carrying out and operating interceptions allowed by law.”

Beccuau said there was a near-total lack of response from Telegram to requests for cooperation in cases related to crimes against minors, drug crimes, and online hate. This led authorities “to open an investigation into the possible criminal responsibility of the messaging app’s executives in the commission of these offenses,” Beccuau said, as quoted by Bloomberg.

Durov was further charged with complicity in drug trafficking and distribution of child pornography.

Cryptology-related charges

He was also charged with providing cryptology services without making required declarations to government officials. Under French law, providers of cryptology must make declarations to ANSSI, the country’s cybersecurity agency. French authorities may request that companies provide “the technical characteristics and the source code of the means of cryptology which was the subject of the declaration.”

The charges against Durov include “providing cryptology services aiming to ensure confidentiality without certified declaration,” “providing a cryptology tool not solely ensuring authentication or integrity monitoring without prior declaration,” and “importing a cryptology tool ensuring authentication or integrity monitoring without prior declaration.”

Telegram offers a mix of private messaging and social network features. Telegram messages do not have end-to-end encryption by default, but the security feature can be enabled for one-on-one conversations.

In response to Durov’s arrest, Telegram said on Sunday that it follows the law and industry standards on moderation and called it “absurd to claim that a platform or its owner are responsible for abuse of that platform.”

French authorities also reportedly issued a warrant for the arrest of Durov’s brother and fellow Telegram co-founder, Nikolai.


Court: Section 230 doesn’t shield TikTok from Blackout Challenge death suit

A dent in the Section 230 shield —

TikTok must face claim over For You Page recommending content that killed kids.


An appeals court has revived a lawsuit against TikTok by reversing a lower court’s ruling that Section 230 immunity shielded the short video app from liability after a child died taking part in a dangerous “Blackout Challenge.”

Several kids died taking part in the “Blackout Challenge,” which Third Circuit Judge Patty Shwartz described in her opinion as encouraging users “to choke themselves with belts, purse strings, or anything similar until passing out.”

Because TikTok promoted the challenge in children’s feeds, Tawainna Anderson counted among mourning parents who attempted to sue TikTok in 2022. Ultimately, she was told that TikTok was not responsible for recommending the video that caused the death of her daughter Nylah.

In her opinion, Shwartz wrote that Section 230 does not bar Anderson from arguing that TikTok’s algorithm amalgamates third-party videos, “which results in ‘an expressive product’ that ‘communicates to users’ [that a] curated stream of videos will be interesting to them.”

The judge cited a recent Supreme Court ruling that “held that a platform’s algorithm that reflects ‘editorial judgments’ about ‘compiling the third-party speech it wants in the way it wants’ is the platform’s own ‘expressive product’ and is therefore protected by the First Amendment,” Shwartz wrote.

Because TikTok’s For You Page (FYP) algorithm decides which third-party speech to include or exclude and organizes content, TikTok’s algorithm counts as TikTok’s own “expressive activity.” That “expressive activity” is not protected by Section 230, which only shields platforms from liability for third-party speech, not platforms’ own speech, Shwartz wrote.

The appeals court has now remanded the case to the district court to rule on Anderson’s remaining claims.

Section 230 doesn’t permit “indifference” to child death

According to Shwartz, if Nylah had discovered the “Blackout Challenge” video by searching on TikTok, the platform would not be liable, but because she found it on her FYP, TikTok transformed into “an affirmative promoter of such content.”

Now TikTok will have to face Anderson’s claims that are “premised upon TikTok’s algorithm,” Shwartz said, as well as other claims that Anderson may reraise, some of which may still be barred by Section 230. The District Court will have to determine which claims are barred by Section 230 “consistent” with the Third Circuit’s ruling.

Concurring in part, circuit Judge Paul Matey noted that by the time Nylah took part in the “Blackout Challenge,” TikTok knew about the dangers and “took no and/or completely inadequate action to extinguish and prevent the spread of the Blackout Challenge and specifically to prevent the Blackout Challenge from being shown to children on their” FYPs.

Matey wrote that Section 230 does not shield corporations “from virtually any claim loosely related to content posted by a third party,” as TikTok seems to believe. He encouraged a “far narrower” interpretation of Section 230 to stop companies like TikTok from reading the Communications Decency Act as permitting “casual indifference to the death of a 10-year-old girl.”

“Anderson’s estate may seek relief for TikTok’s knowing distribution and targeted recommendation of videos it knew could be harmful,” Matey wrote. That includes pursuing “claims seeking to hold TikTok liable for continuing to host the Blackout Challenge videos knowing they were causing the death of children” and “claims seeking to hold TikTok liable for its targeted recommendations of videos it knew were harmful.”

“The company may decide to curate the content it serves up to children to emphasize the lowest virtues, the basest tastes,” Matey wrote. “But it cannot claim immunity that Congress did not provide.”

Anderson’s lawyers at Jeffrey Goodman, Saltz Mongeluzzi & Bendesky PC previously provided Ars with a statement after the prior court’s ruling, indicating that parents weren’t prepared to stop fighting in 2022.

“The federal Communications Decency Act was never intended to allow social media companies to send dangerous content to children, and the Andersons will continue advocating for the protection of our children from an industry that exploits youth in the name of profits,” lawyers said.

TikTok did not immediately respond to Ars’ request to comment but previously vowed to “remain vigilant in our commitment to user safety” and “immediately remove” Blackout Challenge content “if found.”
