Artificial Intelligence

Cloudflare moves to end free, endless AI scraping with one-click blocking

Cloudflare announced new tools Monday that it claims will help end the era of endless AI scraping by giving all sites on its network the power to block bots in one click.

That will help stop the firehose of unrestricted AI scraping, but, perhaps even more intriguing to content creators everywhere, Cloudflare says it will also make it easier to identify which content bots scan most, so that sites can eventually wall off access and charge bots to scrape their most valuable content. To pave the way for that future, Cloudflare is also creating a marketplace for all sites to negotiate content deals based on more granular AI audits of their sites.

These tools, Cloudflare’s blog said, give content creators “for the first time” ways “to quickly and easily understand how AI model providers are using their content, and then take control of whether and how the models are able to access it.”

That’s necessary for content creators because the rise of generative AI has made it harder to value their content, Cloudflare suggested in a longer blog explaining the tools.

Previously, sites could distinguish between approving access to helpful bots that drive traffic, like search engine crawlers, and denying access to bad bots that try to take down sites or scrape sensitive or competitive data.

But now, “Large Language Models (LLMs) and other generative tools created a murkier third category” of bots, Cloudflare said, that don’t perfectly fit in either category. They don’t “necessarily drive traffic” like a good bot, but they also don’t try to steal sensitive data like a bad bot, so many site operators don’t have a clear way to think about the “value exchange” of allowing AI scraping, Cloudflare said.

That’s a problem because enabling all scraping could hurt content creators in the long run, Cloudflare predicted.

“Many sites allowed these AI crawlers to scan their content because these crawlers, for the most part, looked like ‘good’ bots—only for the result to mean less traffic to their site as their content is repackaged in AI-written answers,” Cloudflare said.

All this unrestricted AI scraping “poses a risk to an open Internet,” Cloudflare warned, proposing that its tools could set a new industry standard for how content is scraped online.

How to block bots in one click

Increasingly, creators fighting to control what happens with their content have been pushed to either sue AI companies to block unwanted scraping, as The New York Times has, or put content behind paywalls, decreasing public access to information.

While some big publishers have been striking content deals with AI companies to license content, Cloudflare is hoping new tools will help to level the playing field for everyone. That way, “there can be a transparent exchange between the websites that want greater control over their content, and the AI model providers that require fresh data sources, so that everyone benefits,” Cloudflare said.

Today, Cloudflare site operators can stop manually blocking each AI bot one by one and instead choose to “block all AI bots in one click,” Cloudflare said.

They can do this by visiting the Bots section under the Security tab of the Cloudflare dashboard, then clicking a blue link in the top-right corner “to configure how Cloudflare’s proxy handles bot traffic,” Cloudflare said. On that screen, operators can easily “toggle the button in the ‘Block AI Scrapers and Crawlers’ card to the ‘On’ position,” blocking everything and giving content creators time to strategize what access they want to re-enable, if any.
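
Cloudflare’s toggle applies this blocking at its own proxy, so operators don’t have to write any rules themselves. For a rough sense of what user-agent-based blocking of self-identified AI crawlers looks like outside Cloudflare, here is a minimal, hypothetical Python sketch using Flask (the crawler tokens listed are illustrative and not exhaustive, and this is not how Cloudflare’s feature is implemented):

```python
# Hypothetical sketch, not Cloudflare's implementation: refuse requests whose
# User-Agent contains a known AI crawler token, the same idea the one-click
# toggle applies at Cloudflare's proxy layer.
from flask import Flask, abort, request

app = Flask(__name__)

# Illustrative, incomplete list of tokens AI companies publish for their crawlers.
AI_CRAWLER_TOKENS = ("GPTBot", "ClaudeBot", "CCBot", "PerplexityBot")

@app.before_request
def block_ai_crawlers():
    user_agent = request.headers.get("User-Agent", "")
    if any(token in user_agent for token in AI_CRAWLER_TOKENS):
        abort(403)  # refuse the request before any content is served

@app.route("/")
def index():
    return "Content for human readers."

if __name__ == "__main__":
    app.run()
```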

Beyond just blocking bots, operators can also conduct AI audits, quickly analyzing which sections of their sites are scanned most by which bots. From there, operators can decide which scraping is allowed and use sophisticated controls to decide which bots can scrape which parts of their sites.
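
Cloudflare hasn’t published how these audits work internally, but conceptually they aggregate bot traffic by crawler and by section of a site. The hypothetical sketch below tallies that kind of breakdown from an ordinary combined-format access log, assuming each line ends with a quoted user-agent string:

```python
# Hypothetical "AI audit" sketch, not Cloudflare's analytics: count which
# self-identified AI crawlers request which top-level sections of a site.
import re
from collections import Counter

AI_CRAWLER_TOKENS = ("GPTBot", "ClaudeBot", "CCBot", "PerplexityBot")
LOG_LINE = re.compile(r'"(?:GET|POST) (?P<path>\S+) [^"]*".*"(?P<agent>[^"]*)"$')

def audit(log_path: str) -> Counter:
    counts = Counter()
    with open(log_path, encoding="utf-8") as handle:
        for line in handle:
            match = LOG_LINE.search(line)
            if not match:
                continue
            token = next((t for t in AI_CRAWLER_TOKENS if t in match.group("agent")), None)
            if token is None:
                continue
            # Group hits by the first path segment, e.g. "/blog" or "/news".
            section = "/" + match.group("path").lstrip("/").split("/", 1)[0]
            counts[(token, section)] += 1
    return counts

if __name__ == "__main__":
    for (bot, section), hits in audit("access.log").most_common():
        print(f"{bot:15} {section:20} {hits}")
```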

“For some teams, the decision will be to allow the bots associated with AI search engines to scan their Internet properties because those tools can still drive traffic to the site,” Cloudflare’s blog explained. “Other organizations might sign deals with a specific model provider, and they want to allow any type of bot from that provider to access their content.”

For publishers already playing whack-a-mole with bots, a key perk would be the ability to write rules restricting bots that scrape sites for both “good” and “bad” purposes, keeping the good uses and blocking the bad.

Perhaps the most frustrating bot for publishers today is the Googlebot, which scrapes sites to populate search results as well as to train AI to generate Google search AI overviews that could negatively impact traffic to source sites by summarizing content. Publishers currently have no way of opting out of training models fueling Google’s AI overviews without losing visibility in search results, and Cloudflare’s tools won’t be able to get publishers out of that uncomfortable position, Cloudflare CEO Matthew Prince confirmed to Ars.

For site operators tempted to toggle off all AI scraping, the risk of blocking the Googlebot and inadvertently causing dips in search traffic may be a compelling reason not to use Cloudflare’s one-click solution.

However, Prince expects “that Google’s practices over the long term won’t be sustainable” and “that Cloudflare will be a part of getting Google and other folks that are like Google” to give creators “much more granular control over” how bots like the Googlebot scrape the web to train AI.

Prince told Ars that while Google solves its “philosophical” internal question of whether the Googlebot’s scraping is for search or for AI, a technical solution to block one bot from certain kinds of scraping will likely soon emerge. And in the meantime, “there can also be a legal solution” that “can rely on contract law” based on improving sites’ terms of service.

Not every site would, of course, be able to afford a lawsuit to challenge AI scraping, but to help creators better defend themselves, Cloudflare drafted “model terms of use that every content creator can add to their sites to legally protect their rights as sites gain more control over AI scraping.” With these terms, sites could perhaps more easily dispute any restricted scraping discovered through Cloudflare’s analytics tools.

“One way or another, Google is going to get forced to be more fine-grained here,” Prince predicted.

How to stop LinkedIn from training AI on your data

Better to beg for forgiveness than ask for permission?

LinkedIn limits opt-outs to future training, warns AI models may spout personal data.

LinkedIn admitted Wednesday that it has been training its own AI on many users’ data without seeking consent. Now there’s no way for users to opt out of training that has already occurred, as LinkedIn limits opt-out to only future AI training.

In a blog detailing updates coming on November 20, LinkedIn general counsel Blake Lawit confirmed that LinkedIn’s user agreement and privacy policy will be changed to better explain how users’ personal data powers AI on the platform.

Under the new privacy policy, LinkedIn now informs users that “we may use your personal data… [to] develop and train artificial intelligence (AI) models, develop, provide, and personalize our Services, and gain insights with the help of AI, automated systems, and inferences, so that our Services can be more relevant and useful to you and others.”

An FAQ explained that the personal data could be collected any time a user interacts with generative AI or other AI features, as well as when a user composes a post, changes their preferences, provides feedback to LinkedIn, or uses the platform for any amount of time.

That data is then stored until the user deletes the AI-generated content. LinkedIn recommends that users turn to its data access tool if they want to delete, or request the deletion of, data collected about past LinkedIn activities.

LinkedIn’s AI models powering generative AI features “may be trained by LinkedIn or another provider,” such as Microsoft, which provides some AI models through its Azure OpenAI service, the FAQ said.

A potentially major privacy risk for users, LinkedIn’s FAQ noted, is that users who “provide personal data as an input to a generative AI powered feature” could end up seeing their “personal data being provided as an output.”

LinkedIn claims that it “seeks to minimize personal data in the data sets used to train the models,” relying on “privacy enhancing technologies to redact or remove personal data from the training dataset.”

While Lawit’s blog avoids clarifying if data already collected can be removed from AI training data sets, the FAQ affirmed that users who automatically opted in to sharing personal data for AI training can only opt out of the invasive data collection “going forward.”

Opting out “does not affect training that has already taken place,” the FAQ said.

A LinkedIn spokesperson told Ars that it “benefits all members” to be opted in to AI training “by default.”

“People can choose to opt out, but they come to LinkedIn to be found for jobs and networking and generative AI is part of how we are helping professionals with that change,” LinkedIn’s spokesperson said.

By allowing opt-outs of future AI training, LinkedIn’s spokesperson additionally claimed that the platform is giving “people using LinkedIn even more choice and control when it comes to how we use data to train our generative AI technology.”

How to opt out of AI training on LinkedIn

Users can opt out of AI training by navigating to the “Data privacy” section in their account settings, then turning off the option allowing collection of “data for generative AI improvement” that LinkedIn otherwise automatically turns on for most users.

The only exception is for users in the European Economic Area or Switzerland, who are protected by stricter privacy laws that require platforms either to obtain consent before collecting personal data or to justify the data collection as a legitimate interest. Those users will not see an option to opt out, because they were never opted in, LinkedIn repeatedly confirmed.

Additionally, users can “object to the use of their personal data for training” generative AI models not used to generate LinkedIn content—such as models used for personalization or content moderation purposes, The Verge noted—by submitting the LinkedIn Data Processing Objection Form.

Last year, LinkedIn shared AI principles, promising to take “meaningful steps to reduce the potential risks of AI.”

One risk that the updated user agreement specified is that using LinkedIn’s generative features to help populate a profile or generate suggestions when writing a post could generate content that “might be inaccurate, incomplete, delayed, misleading or not suitable for your purposes.”

Users are advised that they are responsible for avoiding sharing misleading information or otherwise spreading AI-generated content that may violate LinkedIn’s community guidelines. And users are additionally warned to be cautious when relying on any information shared on the platform.

“Like all content and other information on our Services, regardless of whether it’s labeled as created by ‘AI,’ be sure to carefully review before relying on it,” LinkedIn’s user agreement says.

Back in 2023, LinkedIn claimed that it would always “seek to explain in clear and simple ways how our use of AI impacts people,” because users’ “understanding of AI starts with transparency.”

If legislation like the European Union’s AI Act and the GDPR—especially the GDPR’s strong privacy protections—were enacted elsewhere, unsuspecting users would face fewer shocks. That would put all companies and their users on equal footing when it comes to training AI models and result in fewer nasty surprises and angry customers.

Creator of fake Kamala Harris video Musk boosted sues Calif. over deepfake laws

After California passed laws cracking down on AI-generated deepfakes of election-related content, a popular conservative influencer promptly sued, accusing California of censoring protected speech, including satire and parody.

In his complaint, Christopher Kohls—who is known as “Mr Reagan” on YouTube and X (formerly Twitter)—said that he was suing “to defend all Americans’ right to satirize politicians.” He claimed that California laws, AB 2655 and AB 2839, were urgently passed after X owner Elon Musk shared a partly AI-generated parody video on the social media platform that Kohls created to “lampoon” presidential hopeful Kamala Harris.

AB 2655, known as the “Defending Democracy from Deepfake Deception Act,” prohibits creating “with actual malice” any “materially deceptive audio or visual media of a candidate for elective office with the intent to injure the candidate’s reputation or to deceive a voter into voting for or against the candidate, within 60 days of the election.” It requires social media platforms to block or remove any reported deceptive material and label “certain additional content” deemed “inauthentic, fake, or false” to prevent election interference.

The other law at issue, AB 2839, titled “Elections: deceptive media in advertisements,” bans anyone from “knowingly distributing an advertisement or other election communication” with “malice” that “contains certain materially deceptive content” within 120 days of an election in California and, in some cases, within 60 days after an election.

Both bills were signed into law on September 17, and Kohls filed his complaint that day, alleging that both must be permanently blocked as unconstitutional.

Elon Musk called out for boosting Kohls’ video

Kohls’ video that Musk shared seemingly would violate these laws by using AI to make Harris appear to give speeches that she never gave. The manipulated audio sounds like Harris, who appears to be mocking herself as a “diversity hire” and claiming that any critics must be “sexist and racist.”

“Making fun of presidential candidates and other public figures is an American pastime,” Kohls said, defending his parody video. He pointed to a long history of political cartoons and comedic impressions of politicians, claiming that “AI-generated commentary, though a new mode of speech, falls squarely within this tradition.”

While Kohls’ post was clearly marked “parody” in the YouTube title and in his post on X, that “parody” label did not carry over when Musk re-posted the video. This lack of a parody label on Musk’s post—which got approximately 136 million views, roughly twice as many as Kohls’ post—set off California governor Gavin Newsom, who immediately blasted Musk’s post and vowed on X to make content like Kohls’ video “illegal.”

In response to Newsom, Musk poked fun at the governor, posting that “I checked with renowned world authority, Professor Suggon Deeznutz, and he said parody is legal in America.” For his part, Kohls put up a second parody video targeting Harris, calling Newsom a “bully” in his complaint and claiming that he had to “punch back.”

Shortly after these online exchanges, California lawmakers allegedly rushed to back the governor, Kohls’ complaint said. They allegedly amended the deepfake bills to ensure that Kohls’ video would be banned when the bills were signed into law, replacing a broad exception for satire in one law with a narrower safe harbor that Kohls claimed would chill humorists everywhere.

“For videos,” his complaint said, disclaimers required under AB 2839 must “appear for the duration of the video” and “must be in a font size ‘no smaller than the largest font size of other text appearing in the visual media.'” For a satirist like Kohls who uses large fonts to optimize videos for mobile, this “would require the disclaimer text to be so large that it could not fit on the screen,” his complaint said.

On top of seeming impractical, the disclaimers would “fundamentally” alter “the nature of his message” by removing the comedic effect for viewers by distracting from what allegedly makes the videos funny—”the juxtaposition of over-the-top statements by the AI-generated ‘narrator,’ contrasted with the seemingly earnest style of the video as if it were a genuine campaign ad,” Kohls’ complaint alleged.

Imagine watching Saturday Night Live with prominent disclaimers taking up your TV screen, his complaint suggested.

It’s possible that Kohls’ concerns about AB 2839 are unwarranted. Newsom spokesperson Izzy Gardon told Politico that Kohls’ parody label on X was good enough to clear him of liability under the law.

“Requiring them to use the word ‘parody’ on the actual video avoids further misleading the public as the video is shared across the platform,” Gardon said. “It’s unclear why this conservative activist is suing California. This new disclosure law for election misinformation isn’t any more onerous than laws already passed in other states, including Alabama.”

AI ruling on jobless claims could make mistakes courts can’t undo, experts warn

Nevada will soon become the first state to use AI to help speed up the decision-making process when ruling on appeals that impact people’s unemployment benefits.

The state’s Department of Employment, Training, and Rehabilitation (DETR) agreed to pay Google $1,383,838 for the AI technology, a 2024 budget document shows, and it will be launched within the “next several months,” Nevada officials told Gizmodo.

Nevada’s first-of-its-kind AI will rely on a Google cloud service called Vertex AI Studio. Connecting to Google’s servers, the state will fine-tune the AI system to only reference information from DETR’s database, which officials think will ensure its decisions are “more tailored” and the system provides “more accurate results,” Gizmodo reported.

Under the contract, DETR will essentially transfer data from transcripts of unemployment appeals hearings and rulings, after which Google’s AI system will process that data, upload it to the cloud, and then compare the information to previous cases.

In as little as five minutes, the AI will issue a ruling that would’ve taken a state employee about three hours to reach without using AI, DETR’s information technology administrator, Carl Stanfield, told The Nevada Independent. That’s highly valuable to Nevada, which has a backlog of more than 40,000 appeals stemming from a pandemic-related spike in unemployment claims while dealing with “unforeseen staffing shortages” that DETR reported in July.

“The time saving is pretty phenomenal,” Stanfield said.

As a safeguard, the AI’s determination is then reviewed by a state employee who will hopefully catch any mistakes, biases, or, perhaps worse, hallucinations, in which the AI makes up facts that could affect the outcome of a claimant’s case.

Google’s spokesperson Ashley Simms told Gizmodo that the tech giant will work with the state to “identify and address any potential bias” and to “help them comply with federal and state requirements.” According to the state’s AI guidelines, the agency must prioritize ethical use of the AI system, “avoiding biases and ensuring fairness and transparency in decision-making processes.”

If the reviewer accepts the AI ruling, they’ll sign off on it and issue the decision. Otherwise, the reviewer will edit the decision and submit feedback so that DETR can investigate what went wrong.
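
Neither DETR nor Google has published the system’s prompts or code, but the workflow described above, a Vertex AI model drafting a decision from a hearing transcript and prior rulings, followed by a human reviewer who approves or rewrites it, can be sketched in rough, hypothetical form. The project ID, model name, and prompt below are placeholders, not details from the contract:

```python
# Hypothetical sketch, not DETR's system: draft an appeals ruling with a
# Vertex AI model, then gate it behind the human review step described above.
import vertexai
from vertexai.generative_models import GenerativeModel

vertexai.init(project="example-project", location="us-central1")  # placeholder values
model = GenerativeModel("gemini-1.5-pro")  # placeholder model choice

def draft_ruling(transcript: str, prior_cases: list[str]) -> str:
    """Ask the model for a draft decision grounded in prior rulings."""
    prompt = (
        "You are assisting with an unemployment-benefits appeal.\n\n"
        "Prior rulings for reference:\n" + "\n---\n".join(prior_cases) + "\n\n"
        "Hearing transcript:\n" + transcript + "\n\n"
        "Draft a proposed ruling and cite which prior rulings you relied on."
    )
    return model.generate_content(prompt).text

def review_and_issue(draft: str) -> str:
    """The safeguard: a state employee approves the draft or rewrites it."""
    print(draft)
    if input("Approve draft as written? [y/N] ").strip().lower() == "y":
        return draft
    return input("Enter the corrected ruling (feedback goes back to DETR): ")
```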

Gizmodo noted that this novel use of AI “represents a significant experiment by state officials and Google in allowing generative AI to influence a high-stakes government decision—one that could put thousands of dollars in unemployed Nevadans’ pockets or take it away.”

Google declined to comment on whether more states are considering using AI to weigh jobless claims.

Cops lure pedophiles with AI pics of teen girl. Ethical triumph or new disaster?

Who is she?

New Mexico sued Snapchat after using AI to reveal child safety risks.

Cops are now using AI to generate images of fake kids, which are helping them catch child predators online, a lawsuit filed by the state of New Mexico against Snapchat revealed this week.

According to the complaint, the New Mexico Department of Justice launched an undercover investigation in recent months to prove that Snapchat “is a primary social media platform for sharing child sexual abuse material (CSAM)” and sextortion of minors, because its “algorithm serves up children to adult predators.”

As part of their probe, an investigator “set up a decoy account for a 14-year-old girl, Sexy14Heather.”

  • An AI-generated image of “Sexy14Heather” included in the New Mexico complaint.

  • An image of a Snapchat avatar for “Sexy14Heather” included in the New Mexico complaint.

Despite Snapchat setting the fake minor’s profile to private and the account not adding any followers, “Heather” was soon recommended widely to “dangerous accounts, including ones named ‘child.rape’ and ‘pedo_lover10,’ in addition to others that are even more explicit,” the New Mexico DOJ said in a press release.

And after “Heather” accepted a follow request from just one account, the recommendations got even worse. “Snapchat suggested over 91 users, including numerous adult users whose accounts included or sought to exchange sexually explicit content,” New Mexico’s complaint alleged.

“Snapchat is a breeding ground for predators to collect sexually explicit images of children and to find, groom, and extort them,” New Mexico’s complaint alleged.

Posing as “Sexy14Heather,” the investigator swapped messages with adult accounts, including users who “sent inappropriate messages and explicit photos.” In one exchange with a user named “50+ SNGL DAD 4 YNGR,” the fake teen “noted her age, sent a photo, and complained about her parents making her go to school,” prompting the user to send “his own photo” as well as sexually suggestive chats. Other accounts asked “Heather” to “trade presumably explicit content,” and several “attempted to coerce the underage persona into sharing CSAM,” the New Mexico DOJ said.

“Heather” also tested out Snapchat’s search tool, finding that “even though she used no sexually explicit language, the algorithm must have determined that she was looking for CSAM” when she searched for other teen users. It “began recommending users associated with trading” CSAM, including accounts with usernames such as “naughtypics,” “addfortrading,” “teentr3de,” “gayhorny13yox,” and “teentradevirgin,” the investigation found, “suggesting that these accounts also were involved in the dissemination of CSAM.”

This novel use of AI was prompted after Albuquerque police indicted a man, Alejandro Marquez, who pled guilty and was sentenced to 18 years for raping an 11-year-old girl he met through Snapchat’s Quick Add feature in 2022, New Mexico’s complaint said. More recently, the New Mexico complaint said, an Albuquerque man, Jeremy Guthrie, was arrested and sentenced this summer for “raping a 12-year-old girl who he met and cultivated over Snapchat.”

In the past, police have posed as kids online to catch child predators using photos of younger-looking adult women or even younger photos of police officers. Using AI-generated images could be considered a more ethical way to conduct these stings, a lawyer specializing in sex crimes, Carrie Goldberg, told Ars, because “an AI decoy profile is less problematic than using images of an actual child.”

But using AI could complicate investigations and carry its own ethical concerns, Goldberg warned, as child safety experts and law enforcement warn that the Internet is increasingly swamped with AI-generated CSAM.

“In terms of AI being used for entrapment, defendants can defend themselves if they say the government induced them to commit a crime that they were not already predisposed to commit,” Goldberg told Ars. “Of course, it would be ethically concerning if the government were to create deepfake AI child sexual abuse material (CSAM), because those images are illegal, and we don’t want more CSAM in circulation.”

Experts have warned that AI image generators should never be trained on datasets that combine images of real kids with explicit content to avoid any instances of AI-generated CSAM, which is particularly harmful when it appears to depict a real kid or an actual victim of child abuse.

In the New Mexico complaint, only one AI-generated image is included, so it’s unclear how widely the state’s DOJ is using AI or if cops are possibly using more advanced methods to generate multiple images of the same fake kid. It’s also unclear what ethical concerns were weighed before cops began using AI decoys.

The New Mexico DOJ did not respond to Ars’ request for comment.

Goldberg told Ars that “there ought to be standards within law enforcement with how to use AI responsibly,” warning that “we are likely to see more entrapment defenses centered around AI if the government is using the technology in a manipulative way to pressure somebody into committing a crime.”

DOJ subpoenas Nvidia in deepening AI antitrust probe, report says

The Department of Justice is reportedly deepening its probe into Nvidia. Officials have moved on from merely questioning competitors to subpoenaing Nvidia and other tech companies for evidence that could substantiate allegations that Nvidia is abusing its “dominant position in AI computing,” Bloomberg reported.

When news of the DOJ’s probe into the trillion-dollar company was first reported in June, Fast Company reported that scrutiny was intensifying merely because Nvidia was estimated to control “as much as 90 percent of the market for chips” capable of powering AI models. Experts told Fast Company that the DOJ probe might even be good for Nvidia’s business, noting that the market barely moved when the probe was first announced.

But the market’s confidence seemed to be shaken a little more on Tuesday, when Nvidia lost a “record-setting $279 billion” in market value following Bloomberg’s report. Nvidia’s losses became “the biggest single-day market-cap decline on record,” TheStreet reported.

People close to the DOJ’s investigation told Bloomberg that the DOJ’s “legally binding requests” require competitors “to provide information” on Nvidia’s suspected anticompetitive behaviors as a “dominant provider of AI processors.”

One concern is that Nvidia may be giving “preferential supply and pricing to customers who use its technology exclusively or buy its complete systems,” sources told Bloomberg. The DOJ is also reportedly probing Nvidia’s acquisition of RunAI—suspecting the deal may lock RunAI customers into using Nvidia chips.

Bloomberg’s report builds on a report last month from The Information that said that Advanced Micro Devices Inc. (AMD) and other Nvidia rivals were questioned by the DOJ—as well as third parties who could shed light on whether Nvidia potentially abused its market dominance in AI chips to pressure customers into buying more products.

According to Bloomberg’s sources, the DOJ is worried that “Nvidia is making it harder to switch to other suppliers and penalizes buyers that don’t exclusively use its artificial intelligence chips.”

In a statement to Bloomberg, Nvidia insisted that “Nvidia wins on merit, as reflected in our benchmark results and value to customers, who can choose whatever solution is best for them.” Additionally, Bloomberg noted that following a chip shortage in 2022, Nvidia CEO Jensen Huang has said that his company strives to prevent stockpiling of Nvidia’s coveted AI chips by prioritizing customers “who can make use of his products in ready-to-go data centers.”

Potential threats to Nvidia’s dominance

Despite the slump in shares, Nvidia’s market dominance seems unlikely to wane any time soon after its stock more than doubled this year. In an SEC filing this year, Nvidia bragged that its “accelerated computing ecosystem is bringing AI to every enterprise” with an “ecosystem” spanning “nearly 5 million developers and 40,000 companies.” Nvidia specifically highlighted that “more than 1,600 generative AI companies are building on Nvidia,” and according to Bloomberg, Nvidia will close out 2024 with more profits than the total sales of its closest competitor, AMD.

After the DOJ’s most recent big win, which successfully proved that Google has a monopoly on search, the DOJ appears intent on getting ahead of any tech companies’ ambitions to seize monopoly power and essentially become the Google of the AI industry. In June, DOJ antitrust chief Jonathan Kanter confirmed to the Financial Times that the DOJ is examining “monopoly choke points and the competitive landscape” in AI beyond just scrutinizing Nvidia.

According to Kanter, the DOJ is scrutinizing all aspects of the AI industry—”everything from computing power and the data used to train large language models, to cloud service providers, engineering talent and access to essential hardware such as graphics processing unit chips.” But in particular, the DOJ appears concerned that GPUs like Nvidia’s advanced AI chips remain a “scarce resource.” Kanter told the Financial Times that an “intervention” in “real time” to block a potential monopoly could be “the most meaningful intervention” and the least “invasive” as the AI industry grows.

Cops’ favorite face image search engine fined $33M for privacy violation

A controversial facial recognition tech company behind a vast face image search engine widely used by cops has been fined approximately $33 million in the Netherlands for serious data privacy violations.

According to the Dutch Data Protection Authority (DPA), Clearview AI “built an illegal database with billions of photos of faces” by crawling the web and without gaining consent, including from people in the Netherlands.

Clearview AI’s technology—which has been banned in some US cities over concerns that it gives law enforcement unlimited power to track people in their daily lives—works by pulling in more than 40 billion face images from the web without setting “any limitations in terms of geographical location or nationality,” the Dutch DPA found. Perhaps most concerning, the Dutch DPA said, Clearview AI also provides “facial recognition software for identifying children,” therefore indiscriminately processing personal data of minors.

Training on the face image data, the technology then makes it possible to upload a photo of anyone and search for matches on the Internet. People appearing in search results, the Dutch DPA found, can be “unambiguously” identified. Billed as a public safety resource accessible only by law enforcement, Clearview AI’s face database casts too wide a net, the Dutch DPA said, with the majority of people pulled into the tool likely never becoming subject to a police search.
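
Clearview AI hasn’t disclosed how its matching works, but the Dutch DPA’s description amounts to a nearest-neighbor search over face encodings. As a purely generic illustration using the open-source face_recognition library, and not Clearview’s code or data, ranking stored encodings by their distance to a face in an uploaded photo could look like this:

```python
# Generic illustration, not Clearview's system: rank stored face encodings by
# distance to a face found in an uploaded photo, using the open-source
# face_recognition library.
import face_recognition

def encode_face(path: str):
    """Return the first face encoding found in an image file, or None."""
    image = face_recognition.load_image_file(path)
    encodings = face_recognition.face_encodings(image)
    return encodings[0] if encodings else None

def search(query_path: str, database: dict) -> list:
    """database maps a label to a stored encoding; returns (label, distance) pairs, closest first."""
    query = encode_face(query_path)
    if query is None:
        return []
    labels = list(database)
    distances = face_recognition.face_distance([database[label] for label in labels], query)
    return sorted(zip(labels, distances), key=lambda pair: pair[1])
```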

“The processing of personal data is not only complex and extensive, it moreover offers Clearview’s clients the opportunity to go through data about individual persons and obtain a detailed picture of the lives of these individual persons,” the Dutch DPA said. “These processing operations therefore are highly invasive for data subjects.”

Clearview AI had no legitimate interest under the European Union’s General Data Protection Regulation (GDPR) for the company’s invasive data collection, Dutch DPA Chairman Aleid Wolfsen said in a press release. The Dutch official likened Clearview AI’s sprawling overreach to “a doom scenario from a scary film,” while emphasizing in his decision that Clearview AI has not only stopped responding to any requests to access or remove data from citizens in the Netherlands, but across the EU.

“Facial recognition is a highly intrusive technology that you cannot simply unleash on anyone in the world,” Wolfsen said. “If there is a photo of you on the Internet—and doesn’t that apply to all of us?—then you can end up in the database of Clearview and be tracked.”

To protect Dutch citizens’ privacy, the Dutch DPA imposed a roughly $33 million fine that could go up by about $5.5 million if Clearview AI does not follow orders on compliance. Any Dutch businesses attempting to use Clearview AI services could also face “hefty fines,” the Dutch DPA warned, as that “is also prohibited” under the GDPR.

Clearview AI was given three months to appoint a representative in the EU to stop processing personal data—including sensitive biometric data—in the Netherlands and to update its privacy policies to inform users in the Netherlands of their rights under the GDPR. But the company only has one month to resume processing requests for data access or removals from people in the Netherlands who otherwise find it “impossible” to exercise their rights to privacy, the Dutch DPA’s decision said.

It appears that Clearview AI has no intentions to comply, however. Jack Mulcaire, the chief legal officer for Clearview AI, confirmed to Ars that the company maintains that it is not subject to the GDPR.

“Clearview AI does not have a place of business in the Netherlands or the EU, it does not have any customers in the Netherlands or the EU, and does not undertake any activities that would otherwise mean it is subject to the GDPR,” Mulcaire said. “This decision is unlawful, devoid of due process and is unenforceable.”

But the Dutch DPA found that GDPR applies to Clearview AI because it gathers personal information about Dutch citizens without their consent and without ever alerting users to the data collection at any point.

“People who are in the database also have the right to access their data,” the Dutch DPA said. “This means that Clearview has to show people which data the company has about them, if they ask for this. But Clearview does not cooperate in requests for access.”

Dutch DPA vows to investigate Clearview AI execs

In the press release, Wolfsen said that the Dutch DPA has “to draw a very clear line” underscoring the “incorrect use of this sort of technology” after Clearview AI refused to change its data collection practices following fines in other parts of the European Union, including Italy and Greece.

While Wolfsen acknowledged that Clearview AI could be used to enhance police investigations, he said that the technology would be more appropriate if it was being managed by law enforcement “in highly exceptional cases only” and not indiscriminately by a private company.

“The company should never have built the database and is insufficiently transparent,” the Dutch DPA said.

Although Clearview AI appears ready to defend against the fine, the Dutch DPA said that the company failed to object to the decision within the provided six-week timeframe and therefore cannot appeal the decision.

Further, the Dutch DPA confirmed that authorities are “looking for ways to make sure that Clearview stops the violations” beyond the fines, including by “investigating if the directors of the company can be held personally responsible for the violations.”

Wolfsen claimed that such “liability already exists if directors know that the GDPR is being violated, have the authority to stop that, but omit to do so, and in this way consciously accept those violations.”

Nonprofit scrubs illegal content from controversial AI training dataset

After Stanford Internet Observatory researcher David Thiel found links to child sexual abuse materials (CSAM) in an AI training dataset tainting image generators, the controversial dataset was immediately taken down in 2023.

Now, the LAION (Large-scale Artificial Intelligence Open Network) team has released a scrubbed version of the LAION-5B dataset called Re-LAION-5B and claimed that it “is the first web-scale, text-link to images pair dataset to be thoroughly cleaned of known links to suspected CSAM.”

To scrub the dataset, LAION partnered with the Internet Watch Foundation (IWF) and the Canadian Center for Child Protection (C3P) to remove 2,236 links that matched with hashed images in the online safety organizations’ databases. Removals include all the links flagged by Thiel, as well as content flagged by LAION’s partners and other watchdogs, like Human Rights Watch, which warned of privacy issues after finding photos of real kids included in the dataset without their consent.

In his study, Thiel warned that “the inclusion of child abuse material in AI model training data teaches tools to associate children in illicit sexual activity and uses known child abuse images to generate new, potentially realistic child abuse content.”

Thiel urged LAION and other researchers scraping the Internet for AI training data to adopt a new safety standard to better filter out not just CSAM, but any explicit imagery that could be combined with photos of children to generate CSAM. (Recently, the US Department of Justice pointedly said that “CSAM generated by AI is still CSAM.”)

While LAION’s new dataset won’t alter models that were trained on the prior dataset, LAION claimed that Re-LAION-5B sets “a new safety standard for cleaning web-scale image-link datasets.” Where before illegal content “slipped through” LAION’s filters, the researchers have now developed an improved new system “for identifying and removing illegal content,” LAION’s blog said.

Thiel told Ars that he would agree that LAION has set a new safety standard with its latest release, but “there are absolutely ways to improve it.” However, “those methods would require possession of all original images or a brand new crawl,” and LAION’s post made clear that it only utilized image hashes and did not conduct a new crawl that could have risked pulling in more illegal or sensitive content. (On Threads, Thiel shared more in-depth impressions of LAION’s effort to clean the dataset.)

LAION warned that “current state-of-the-art filters alone are not reliable enough to guarantee protection from CSAM in web scale data composition scenarios.”

“To ensure better filtering, lists of hashes of suspected links or images created by expert organizations (in our case, IWF and C3P) are suitable choices,” LAION’s blog said. “We recommend research labs and any other organizations composing datasets from the public web to partner with organizations like IWF and C3P to obtain such hash lists and use those for filtering. In the longer term, a larger common initiative can be created that makes such hash lists available for the research community working on dataset composition from web.”
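
LAION didn’t publish its filtering code, but the hash-list approach it describes can be sketched generically: hash each image and drop any record whose hash appears on an expert-supplied list. In the hypothetical sketch below, SHA-256 stands in for the specialized hash formats that IWF and C3P actually use and share only with vetted partners:

```python
# Generic sketch of hash-list filtering, not LAION's pipeline: drop dataset
# records whose image hash appears on a list supplied by expert organizations.
import hashlib

def load_hash_list(path: str) -> set:
    """Read one hex digest per line from an expert-supplied hash list."""
    with open(path, encoding="utf-8") as handle:
        return {line.strip().lower() for line in handle if line.strip()}

def is_flagged(image_bytes: bytes, flagged_hashes: set) -> bool:
    return hashlib.sha256(image_bytes).hexdigest() in flagged_hashes

def clean_dataset(records, flagged_hashes):
    """Yield (url, image_bytes) pairs whose image hash is not on the flagged list."""
    for url, image_bytes in records:
        if not is_flagged(image_bytes, flagged_hashes):
            yield url, image_bytes
```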

According to LAION, the bigger concern is that some links to known CSAM scraped into a 2022 dataset are still active more than a year later.

“It is a clear hint that law enforcement bodies have to intensify the efforts to take down domains that host such image content on public web following information and recommendations by organizations like IWF and C3P, making it a safer place, also for various kinds of research related activities,” LAION’s blog said.

HRW researcher Hye Jung Han praised LAION for removing sensitive data that she flagged, while also urging more interventions.

“LAION’s responsive removal of some children’s personal photos from their dataset is very welcome, and will help to protect these children from their likenesses being misused by AI systems,” Han told Ars. “It’s now up to governments to pass child data protection laws that would protect all children’s privacy online.”

Although LAION’s blog said that the content removals represented an “upper bound” of CSAM that existed in the initial dataset, AI specialist and Creative.AI co-founder Alex Champandard told Ars that he’s skeptical that all CSAM was removed.

“They only filter out previously identified CSAM, which is only a partial solution,” Champandard told Ars. “Statistically speaking, most instances of CSAM have likely never been reported nor investigated by C3P or IWF. A more reasonable estimate of the problem is about 25,000 instances of things you’d never want to train generative models on—maybe even 50,000.”

Champandard agreed with Han that more regulations are needed to protect people from AI harms when training data is scraped from the web.

“There’s room for improvement on all fronts: privacy, copyright, illegal content, etc.,” Champandard said. Because “there are too many data rights being broken with such web-scraped datasets,” Champandard suggested that datasets like LAION’s won’t “stand the test of time.”

“LAION is simply operating in the regulatory gap and lag in the judiciary system until policymakers realize the magnitude of the problem,” Champandard said.

Feds to get early access to OpenAI, Anthropic AI to test for doomsday scenarios

“Advancing the science of AI safety”

AI companies agreed that ensuring AI safety was key to innovation.

OpenAI and Anthropic have each signed unprecedented deals granting the US government early access to conduct safety testing on the companies’ flashiest new AI models before they’re released to the public.

According to a press release from the National Institute of Standards and Technology (NIST), the deal creates a “formal collaboration on AI safety research, testing, and evaluation with both Anthropic and OpenAI” and the US Artificial Intelligence Safety Institute.

Through the deal, the US AI Safety Institute will “receive access to major new models from each company prior to and following their public release.” This will ensure, NIST said, that public safety won’t depend exclusively on how the companies “evaluate capabilities and safety risks, as well as methods to mitigate those risks,” but will also rest on collaborative research with the US government.

The US AI Safety Institute will also be collaborating with the UK AI Safety Institute when examining models to flag potential safety risks. Both groups will provide feedback to OpenAI and Anthropic “on potential safety improvements to their models.”

NIST said that the agreements also build on voluntary AI safety commitments that AI companies made to the Biden administration to evaluate models to detect risks.

Elizabeth Kelly, director of the US AI Safety Institute, called the agreements “an important milestone” to “help responsibly steward the future of AI.”

Anthropic co-founder: AI safety “crucial” to innovation

The announcement comes as California is poised to pass one of the country’s first AI safety bills, which will regulate how AI is developed and deployed in the state.

Among the most controversial aspects of the bill is a requirement that AI companies build in a “kill switch” to stop models from introducing “novel threats to public safety and security,” especially if the model is acting “with limited human oversight, intervention, or supervision.”

Critics say the bill overlooks existing safety risks from AI—like deepfakes and election misinformation—to prioritize prevention of doomsday scenarios and could stifle AI innovation while providing little security today. They’ve urged California’s governor, Gavin Newsom, to veto the bill if it arrives at his desk, but it’s still unclear if Newsom intends to sign.

Anthropic was one of the AI companies that cautiously supported California’s controversial AI bill, Reuters reported, claiming that the potential benefits of the regulations likely outweigh the costs after a late round of amendments.

The company’s CEO, Dario Amodei, told Newsom why Anthropic supports the bill now in a letter last week, Reuters reported. He wrote that although Anthropic isn’t certain about aspects of the bill that “seem concerning or ambiguous,” Anthropic’s “initial concerns about the bill potentially hindering innovation due to the rapidly evolving nature of the field have been greatly reduced” by recent changes to the bill.

OpenAI has notably joined critics opposing California’s AI safety bill and has been called out by whistleblowers for lobbying against it.

In a letter to the bill’s co-sponsor, California Senator Scott Wiener, OpenAI’s chief strategy officer, Jason Kwon, suggested that “the federal government should lead in regulating frontier AI models to account for implications to national security and competitiveness.”

The ChatGPT maker striking a deal with the US AI Safety Institute seems in line with that thinking. As Kwon told Reuters, “We believe the institute has a critical role to play in defining US leadership in responsibly developing artificial intelligence and hope that our work together offers a framework that the rest of the world can build on.”

While some critics worry California’s AI safety bill will hamper innovation, Anthropic’s co-founder, Jack Clark, told Reuters today that “safe, trustworthy AI is crucial for the technology’s positive impact.” He confirmed that Anthropic’s “collaboration with the US AI Safety Institute” will leverage the government’s “wide expertise to rigorously test” Anthropic’s models “before widespread deployment.”

In NIST’s press release, Kelly agreed that “safety is essential to fueling breakthrough technological innovation.”

By directly collaborating with OpenAI and Anthropic, the US AI Safety Institute also plans to conduct its own research to help “advance the science of AI safety,” Kelly said.

Artists claim “big” win in copyright suit fighting AI image generators

Back to the drawing board

Artists prepare to take on AI image generators as copyright suit proceeds

Artists defending a class-action lawsuit are claiming a major win this week in their fight to stop the most sophisticated AI image generators from copying billions of artworks to train AI models and replicate their styles without compensating artists.

In an order on Monday, US district judge William Orrick denied key parts of motions to dismiss from Stability AI, Midjourney, Runway AI, and DeviantArt. The court will now allow artists to proceed with discovery on claims that AI image generators relying on Stable Diffusion violate both the Copyright Act and the Lanham Act, which protects artists from commercial misuse of their names and unique styles.

“We won BIG,” an artist plaintiff, Karla Ortiz, wrote on X (formerly Twitter), celebrating the order. “Not only do we proceed on our copyright claims,” but “this order also means companies who utilize” Stable Diffusion models and LAION-like datasets that scrape artists’ works for AI training without permission “could now be liable for copyright infringement violations, amongst other violations.”

Lawyers for the artists, Joseph Saveri and Matthew Butterick, told Ars that artists suing “consider the Court’s order a significant step forward for the case,” as “the Court allowed Plaintiffs’ core copyright-infringement claims against all four defendants to proceed.”

Stability AI was the only company that responded to Ars’ request for comment, and it declined to comment.

Artists prepare to defend their livelihoods from AI

To get to this stage of the suit, artists had to amend their complaint to better explain exactly how AI image generators allegedly train on artists’ images and copy artists’ styles.

For example, they were told that if they “contend Stable Diffusion contains ‘compressed copies’ of the Training Images, they need to define ‘compressed copies’ and explain plausible facts in support. And if plaintiffs’ compressed copies theory is based on a contention that Stable Diffusion contains mathematical or statistical methods that can be carried out through algorithms or instructions in order to reconstruct the Training Images in whole or in part to create the new Output Images, they need to clarify that and provide plausible facts in support,” Orrick wrote.

To keep their fight alive, the artists pored through academic articles to support their arguments that “Stable Diffusion is built to a significant extent on copyrighted works and that the way the product operates necessarily invokes copies or protected elements of those works.” Orrick agreed that their amended complaint made plausible inferences that “at this juncture” is enough to support claims “that Stable Diffusion by operation by end users creates copyright infringement and was created to facilitate that infringement by design.”

“Specifically, the Court found Plaintiffs’ theory that image-diffusion models like Stable Diffusion contain compressed copies of their datasets to be plausible,” Saveri and Butterick’s statement to Ars said. “The Court also found it plausible that training, distributing, and copying such models constitute acts of copyright infringement.”

Not all of the artists’ claims survived, with Orrick granting motions to dismiss claims alleging that AI companies removed content management information from artworks in violation of the Digital Millennium Copyright Act (DMCA). Because artists failed to show evidence of defendants altering or stripping this information, they must permanently drop the DMCA claims.

Part of Orrick’s decision on the DMCA claims, however, indicates that the legal basis for dismissal is “unsettled,” with Orrick simply agreeing with Stability AI’s argument that “because the output images are admittedly not identical to the Training Images, there can be no liability for any removal of CMI that occurred during the training process.”

Ortiz wrote on X that she respectfully disagreed with that part of the decision but expressed enthusiasm that the court allowed artists to proceed with false endorsement claims, alleging that Midjourney violated the Lanham Act.

Five artists successfully argued that because “their names appeared on the list of 4,700 artists posted by Midjourney’s CEO on Discord” and that list was used to promote “the various styles of artistic works its AI product could produce,” this plausibly created confusion over whether those artists had endorsed Midjourney.

“Whether or not a reasonably prudent consumer would be confused or misled by the Names List and showcase to conclude that the included artists were endorsing the Midjourney product can be tested at summary judgment,” Orrick wrote. “Discovery may show that it is or that is it not.”

While Orrick agreed with Midjourney that “plaintiffs have no protection over ‘simple, cartoony drawings’ or ‘gritty fantasy paintings,'” artists were able to advance a “trade dress” claim under the Lanham Act, too. This is because Midjourney allegedly “allows users to create works capturing the ‘trade dress of each of the Midjourney Named Plaintiffs [that] is inherently distinctive in look and feel as used in connection with their artwork and art products.'”

As discovery proceeds in the case, artists will also have an opportunity to amend dismissed claims of unjust enrichment. According to Orrick, their next amended complaint will be their last chance to prove that AI companies have “deprived plaintiffs ‘the benefit of the value of their works.'”

Saveri and Butterick confirmed that “though the Court dismissed certain supplementary claims, Plaintiffs’ central claims will now proceed to discovery and trial.” On X, Ortiz suggested that the artists’ case is “now potentially one of THE biggest copyright infringement and trade dress cases ever!”

“Looking forward to the next stage of our fight!” Ortiz wrote.

Elon Musk sues OpenAI, Sam Altman for making a “fool” out of him

“Altman’s long con”

Elon Musk asks court to void Microsoft’s exclusive deal with OpenAI.

Elon Musk and Sam Altman share the stage in 2015, the same year that Musk alleged that Altman’s “deception” began.

After withdrawing his lawsuit in June for unknown reasons, Elon Musk has revived a complaint accusing OpenAI and its CEO Sam Altman of fraudulently inducing Musk to contribute $44 million in seed funding by promising that OpenAI would always open-source its technology and prioritize serving the public good over profits as a permanent nonprofit.

Instead, Musk alleged that Altman and his co-conspirators—”preying on Musk’s humanitarian concern about the existential dangers posed by artificial intelligence”—always intended to “betray” these promises in pursuit of personal gains.

As OpenAI’s technology advanced toward artificial general intelligence (AGI) and strove to surpass human capabilities, “Altman set the bait and hooked Musk with sham altruism then flipped the script as the non-profit’s technology approached AGI and profits neared, mobilizing Defendants to turn OpenAI, Inc. into their personal piggy bank and OpenAI into a moneymaking bonanza, worth billions,” Musk’s complaint said.

Where Musk saw OpenAI as his chance to fund a meaningful rival to stop Google from controlling the most powerful AI, Altman and others “wished to launch a competitor to Google” and allegedly deceived Musk to do it. According to Musk:

The idea Altman sold Musk was that a non-profit, funded and backed by Musk, would attract world-class scientists, conduct leading AI research and development, and, as a meaningful counterweight to Google’s DeepMind in the race for Artificial General Intelligence (“AGI”), decentralize its technology by making it open source. Altman assured Musk that the non-profit structure guaranteed neutrality and a focus on safety and openness for the benefit of humanity, not shareholder value. But as it turns out, this was all hot-air philanthropy—the hook for Altman’s long con.

Without Musk’s involvement and funding during OpenAI’s “first five critical years,” Musk’s complaint said, “it is fair to say” that “there would have been no OpenAI.” And when Altman and others repeatedly approached Musk with plans to shift OpenAI to a for-profit model, Musk held strong to his morals, conditioning his ongoing contributions on OpenAI remaining a nonprofit and its tech largely remaining open source.

“Either go do something on your own or continue with OpenAI as a nonprofit,” Musk told Altman in 2018 when Altman tried to “recast the nonprofit as a moneymaking endeavor to bring in shareholders, sell equity, and raise capital.”

“I will no longer fund OpenAI until you have made a firm commitment to stay, or I’m just being a fool who is essentially providing free funding to a startup,” Musk said at the time. “Discussions are over.”

But discussions weren’t over. And now Musk seemingly does feel like a fool after OpenAI exclusively licensed GPT-4 and all “pre-AGI” technology to Microsoft in 2023, while putting up paywalls and “failing to publicly disclose the non-profit’s research and development, including details on GPT-4, GPT-4T, and GPT-4o’s architecture, hardware, training method, and training computation.” This excluded the public “from open usage of GPT-4 and related technology to advance Defendants and Microsoft’s own commercial interests,” Musk alleged.

Now Musk has revived his suit against OpenAI, asking the court to award maximum damages for OpenAI’s alleged fraud, breach of contract, false advertising, unfair competition, and other violations.

He has also asked the court to settle a very technical question: whether OpenAI’s most recent models should be considered AGI, which would void Microsoft’s license. That, Musk argues, is the only way to ensure that a private corporation isn’t controlling OpenAI’s AGI models, a scenario he says he repeatedly conditioned his financial contributions on preventing.

“Musk contributed considerable money and resources to launch and sustain OpenAI, Inc., which was done on the condition that the endeavor would be and remain a non-profit devoted to openly sharing its technology with the public and avoid concentrating its power in the hands of the few,” Musk’s complaint said. “Defendants knowingly and repeatedly accepted Musk’s contributions in order to develop AGI, with no intention of honoring those conditions once AGI was in reach. Case in point: GPT-4, GPT-4T, and GPT-4o are all closed source and shrouded in secrecy, while Defendants actively work to transform the non-profit into a thoroughly commercial business.”

Musk wants Microsoft’s GPT-4 license voided

Musk also asked the court to declare OpenAI’s exclusive license to Microsoft null and void, or else determine “whether GPT-4, GPT-4T, GPT-4o, and other OpenAI next generation large language models constitute AGI and are thus excluded from Microsoft’s license.”

It’s clear that Musk considers these models to be AGI, and he’s alleged that Altman’s current control of OpenAI’s Board—after firing dissidents in 2023 whom Musk claimed tried to get Altman ousted for prioritizing profits over AI safety—gives Altman the power to obscure when OpenAI’s models constitute AGI.



AI’s future in grave danger from Nvidia’s chokehold on chips, groups warn

Controlling “the world’s computing destiny” —

Anti-monopoly groups want DOJ to probe Nvidia’s AI chip bundling, alleged price-fixing.


Sen. Elizabeth Warren (D-Mass.) has joined progressive groups—including Demand Progress, Open Markets Institute, and the Tech Oversight Project—in pressuring the US Department of Justice to investigate Nvidia’s dominance in the AI chip market over antitrust concerns, Reuters reported.

In a letter to the DOJ’s chief antitrust enforcer, Jonathan Kanter, groups demanding more Big Tech oversight raised alarms that Nvidia’s top rivals apparently “are struggling to gain traction” because “Nvidia’s near-absolute dominance of the market is difficult to counter” and “funders are wary of backing its rivals.”

Nvidia is currently “the world’s most valuable public company,” their letter said, worth more than $3 trillion after taking near-total control of the high-performance AI chip market. Particularly “astonishing,” the letter said, was Nvidia’s dominance in the market for GPU accelerator chips, which are at the heart of today’s leading AI. Groups urged Kanter to probe Nvidia’s business practices to ensure that rivals aren’t permanently blocked from competing.

According to the advocacy groups that strongly oppose Big Tech monopolies, Nvidia “now holds an 80 percent overall global market share in GPU chips and a 98 percent share in the data center market.” This “puts it in a position to crowd out competitors and set global pricing and the terms of trade,” the letter warned.

Earlier this year, inside sources reported that the DOJ and the Federal Trade Commission reached a deal where the DOJ would probe Nvidia’s alleged anti-competitive behavior in the booming AI industry, and the FTC would probe OpenAI and Microsoft. But there has been no official Nvidia probe announced, prompting progressive groups to push harder for the DOJ to recognize what they view as a “dire danger to the open market” that “well deserves DOJ scrutiny.”

Ultimately, the advocacy groups told Kanter that they fear Nvidia wielding “control over the world’s computing destiny,” noting that Nvidia’s cloud computing data centers don’t just power “Big Tech’s consumer products” but also “underpin every aspect of contemporary society, including the financial system, logistics, healthcare, and defense.”

They claimed that Nvidia is “leveraging” its “scarce chips” to force customers to buy its “chips, networking, and programming software as a package.” Such bundling and “price-fixing,” their letter warned, appear to be “the same kinds of anti-competitive tactics that the courts, in response to actions brought by the Department of Justice against other companies, have found to be illegal” and could perhaps “stifle innovation.”

Although data from TechInsights suggested that Nvidia’s chip shortage and cost actually helped companies like AMD and Intel sell chips in 2023, both Nvidia rivals reported losses in market share earlier this year, Yahoo Finance reported.

French antitrust authorities, which are perhaps monitoring Nvidia’s dominance most closely, launched an investigation into the company last month, the letter said, “making it the first enforcer to act against the computer chip maker,” Reuters reported.

Since then, the European Union and the United Kingdom, as well as the US, have heightened scrutiny, but their apparent lag in following through with an official investigation may only embolden Nvidia, as the company allegedly “believes its market behavior is above the law,” the progressive groups wrote. Suspicious behavior includes allegations that “Nvidia has continued to sell chips to Chinese customers and provide them computing access” despite a “Department of Commerce ban on trading with Chinese companies due to national security and human rights concerns.”

“Its chips have been confirmed to be reaching blacklisted Chinese entities,” their letter warned, citing a Wall Street Journal report.

Nvidia’s dominance apparently affects everyone involved with AI. According to the letter, Nvidia’s practice of seemingly “determining who receives inventory from a limited supply, setting premium pricing, and contractually blocking customers from doing business with competitors” is “alarming” the entire AI industry, including “both small companies (who find their supply choked off) and the Big Tech AI giants.”

Kanter will likely be receptive to the letter. In June, Fast Company reported that Kanter told an audience at an AI conference that there are “structures and trends in AI that should give us pause.” He further suggested that any technology that “relies on massive amounts of data and computing power” can “give already dominant firms a substantial advantage,” according to Fast Company’s summary of his remarks.
