Biz & IT

“in-10-years,-all-bets-are-off”—anthropic-ceo-opposes-decadelong-freeze-on-state-ai-laws

“In 10 years, all bets are off”—Anthropic CEO opposes decadelong freeze on state AI laws

On Thursday, Anthropic CEO Dario Amodei argued against a proposed 10-year moratorium on state AI regulation in a New York Times opinion piece, calling the measure shortsighted and overbroad as Congress considers including it in President Trump’s tax policy bill. Anthropic makes Claude, an AI assistant similar to ChatGPT.

Amodei warned that AI is advancing too fast for such a long freeze, predicting these systems “could change the world, fundamentally, within two years; in 10 years, all bets are off.”

As we covered in May, the moratorium would prevent states from regulating AI for a decade. A bipartisan group of state attorneys general has opposed the measure, which would preempt AI laws and regulations recently passed in dozens of states.

In his op-ed piece, Amodei said the proposed moratorium aims to prevent inconsistent state laws that could burden companies or compromise America’s competitive position against China. “I am sympathetic to these concerns,” Amodei wrote. “But a 10-year moratorium is far too blunt an instrument. A.I. is advancing too head-spinningly fast.”

Instead of a blanket moratorium, Amodei proposed that the White House and Congress create a federal transparency standard requiring frontier AI developers to publicly disclose their testing policies and safety measures. Under this framework, companies working on the most capable AI models would need to publish on their websites how they test for various risks and what steps they take before release.

“Without a clear plan for a federal response, a moratorium would give us the worst of both worlds—no ability for states to act and no national policy as a backstop,” Amodei wrote.

Transparency as the middle ground

Amodei emphasized his claims for AI’s transformative potential throughout his op-ed, citing examples of pharmaceutical companies drafting clinical study reports in minutes instead of weeks and AI helping to diagnose medical conditions that might otherwise be missed. He wrote that AI “could accelerate economic growth to an extent not seen for a century, improving everyone’s quality of life,” a claim that some skeptics believe may be overhyped.

“In 10 years, all bets are off”—Anthropic CEO opposes decadelong freeze on state AI laws Read More »

two-certificate-authorities-booted-from-the-good-graces-of-chrome

Two certificate authorities booted from the good graces of Chrome

Google says its Chrome browser will stop trusting certificates from two certificate authorities after “patterns of concerning behavior observed over the past year” diminished trust in their reliability.

The two organizations, Taiwan-based Chunghwa Telecom and Budapest-based Netlock, are among the hundreds of certificate authorities trusted by Chrome and most other browsers to provide digital certificates that encrypt traffic and certify the authenticity of sites. With the ability to mint cryptographic credentials that cause address bars to display a padlock, assuring the trustworthiness of a site, these certificate authorities wield significant control over the security of the web.

Inherent risk

“Over the past several months and years, we have observed a pattern of compliance failures, unmet improvement commitments, and the absence of tangible, measurable progress in response to publicly disclosed incident reports,” members of the Chrome security team wrote Tuesday. “When these factors are considered in aggregate and considered against the inherent risk each publicly-trusted CA poses to the internet, continued public trust is no longer justified.”

Two certificate authorities booted from the good graces of Chrome Read More »

meta-and-yandex-are-de-anonymizing-android-users’-web-browsing-identifiers

Meta and Yandex are de-anonymizing Android users’ web browsing identifiers


Abuse allows Meta and Yandex to attach persistent identifiers to detailed browsing histories.

Credit: Aurich Lawson | Getty Images

Credit: Aurich Lawson | Getty Images

Tracking code that Meta and Russia-based Yandex embed into millions of websites is de-anonymizing visitors by abusing legitimate Internet protocols, causing Chrome and other browsers to surreptitiously send unique identifiers to native apps installed on a device, researchers have discovered. Google says it’s investigating the abuse, which allows Meta and Yandex to convert ephemeral web identifiers into persistent mobile app user identities.

The covert tracking—implemented in the Meta Pixel and Yandex Metrica trackers—allows Meta and Yandex to bypass core security and privacy protections provided by both the Android operating system and browsers that run on it. Android sandboxing, for instance, isolates processes to prevent them from interacting with the OS and any other app installed on the device, cutting off access to sensitive data or privileged system resources. Defenses such as state partitioning and storage partitioning, which are built into all major browsers, store site cookies and other data associated with a website in containers that are unique to every top-level website domain to ensure they’re off-limits for every other site.

A blatant violation

“One of the fundamental security principles that exists in the web, as well as the mobile system, is called sandboxing,” Narseo Vallina-Rodriguez, one of the researchers behind the discovery, said in an interview. “You run everything in a sandbox, and there is no interaction within different elements running on it. What this attack vector allows is to break the sandbox that exists between the mobile context and the web context. The channel that exists allowed the Android system to communicate what happens in the browser with the identity running in the mobile app.”

The bypass—which Yandex began in 2017 and Meta started last September—allows the companies to pass cookies or other identifiers from Firefox and Chromium-based browsers to native Android apps for Facebook, Instagram, and various Yandex apps. The companies can then tie that vast browsing history to the account holder logged into the app.

This abuse has been observed only in Android, and evidence suggests that the Meta Pixel and Yandex Metrica target only Android users. The researchers say it may be technically feasible to target iOS because browsers on that platform allow developers to programmatically establish localhost connections that apps can monitor on local ports.

In contrast to iOS, however, Android imposes fewer controls on local host communications and background executions of mobile apps, the researchers said, while also implementing stricter controls in app store vetting processes to limit such abuses. This overly permissive design allows Meta Pixel and Yandex Metrica to send web requests with web tracking identifiers to specific local ports that are continuously monitored by the Facebook, Instagram, and Yandex apps. These apps can then link pseudonymous web identities with actual user identities, even in private browsing modes, effectively de-anonymizing users’ browsing habits on sites containing these trackers.

Meta Pixel and Yandex Metrica are analytics scripts designed to help advertisers measure the effectiveness of their campaigns. Meta Pixel and Yandex Metrica are estimated to be installed on 5.8 million and 3 million sites, respectively.

Meta and Yandex achieve the bypass by abusing basic functionality built into modern mobile browsers that allows browser-to-native app communications. The functionality lets browsers send web requests to local Android ports to establish various services, including media connections through the RTC protocol, file sharing, and developer debugging.

A conceptual diagram representing the exchange of identifiers between the web trackers running on the browser context and native Facebook, Instagram, and Yandex apps for Android.

A conceptual diagram representing the exchange of identifiers between the web trackers running on the browser context and native Facebook, Instagram, and Yandex apps for Android.

While the technical underpinnings differ, both Meta Pixel and Yandex Metrica are performing a “weird protocol misuse” to gain unvetted access that Android provides to localhost ports on the 127.0.0.1 IP address. Browsers access these ports without user notification. Facebook, Instagram, and Yandex native apps silently listen on those ports, copy identifiers in real time, and link them to the user logged into the app.

A representative for Google said the behavior violates the terms of service for its Play marketplace and the privacy expectations of Android users.

“The developers in this report are using capabilities present in many browsers across iOS and Android in unintended ways that blatantly violate our security and privacy principles,” the representative said, referring to the people who write the Meta Pixel and Yandex Metrica JavaScript. “We’ve already implemented changes to mitigate these invasive techniques and have opened our own investigation and are directly in touch with the parties.”

Meta didn’t answer emailed questions for this article, but provided the following statement: “We are in discussions with Google to address a potential miscommunication regarding the application of their policies. Upon becoming aware of the concerns, we decided to pause the feature while we work with Google to resolve the issue.”

Yandex representatives didn’t answer an email seeking comment.

How Meta and Yandex de-anonymize Android users

Meta Pixel developers have abused various protocols to implement the covert listening since the practice began last September. They started by causing apps to send HTTP requests to port 12387. A month later, Meta Pixel stopped sending this data, even though Facebook and Instagram apps continued to monitor the port.

In November, Meta Pixel switched to a new method that invoked WebSocket, a protocol for two-way communications, over port 12387.

That same month, Meta Pixel also deployed a new method that used WebRTC, a real-time peer-to-peer communication protocol commonly used for making audio or video calls in the browser. This method used a complicated process known as SDP munging, a technique for JavaScript code to modify Session Description Protocol data before it’s sent. Still in use today, the SDP munging by Meta Pixel inserts key _fbp cookie content into fields meant for connection information. This causes the browser to send that data as part of a STUN request to the Android local host, where the Facebook or Instagram app can read it and link it to the user.

In May, a beta version of Chrome introduced a mitigation that blocked the type of SDP munging that Meta Pixel used. Within days, Meta Pixel circumvented the mitigation by adding a new method that swapped the STUN requests with the TURN requests.

In a post, the researchers provided a detailed description of the _fbp cookie from a website to the native app and, from there, to the Meta server:

1. The user opens the native Facebook or Instagram app, which eventually is sent to the background and creates a background service to listen for incoming traffic on a TCP port (12387 or 12388) and a UDP port (the first unoccupied port in 12580–12585). Users must be logged-in with their credentials on the apps.

2. The user opens their browser and visits a website integrating the Meta Pixel.

3. At this stage, some websites wait for users’ consent before embedding Meta Pixel. In our measurements of the top 100K website homepages, we found websites that require consent to be a minority (more than 75% of affected sites does not require user consent)…

4. The Meta Pixel script is loaded and the _fbp cookie is sent to the native Instagram or Facebook app via WebRTC (STUN) SDP Munging.

5. The Meta Pixel script also sends the _fbp value in a request to https://www.facebook.com/tr along with other parameters such as page URL (dl), website and browser metadata, and the event type (ev) (e.g., PageView, AddToCart, Donate, Purchase).

6. The Facebook or Instagram apps receive the _fbp cookie from the Meta JavaScripts running on the browser and transmits it to the GraphQL endpoint (https://graph[.]facebook[.]com/graphql) along with other persistent user identifiers, linking users’ fbp ID (web visit) with their Facebook or Instagram account.

Detailed flow of the way the Meta Pixel leaks the _fbp cookie from Android browsers to it’s Facebook and Instagram apps.

Detailed flow of the way the Meta Pixel leaks the _fbp cookie from Android browsers to it’s Facebook and Instagram apps.

The first known instance of Yandex Metrica linking websites visited in Android browsers to app identities was in May 2017, when the tracker started sending HTTP requests to local ports 29009 and 30102. In May 2018, Yandex Metrica also began sending the data through HTTPS to ports 29010 and 30103. Both methods remained in place as of publication time.

An overview of Yandex identifier sharing

An overview of Yandex identifier sharing

A timeline of web history tracking by Meta and Yandex

A timeline of web history tracking by Meta and Yandex

Some browsers for Android have blocked the abusive JavaScript in trackers. DuckDuckGo, for instance, was already blocking domains and IP addresses associated with the trackers, preventing the browser from sending any identifiers to Meta. The browser also blocked most of the domains associated with Yandex Metrica. After the researchers notified DuckDuckGo of the incomplete blacklist, developers added the missing addresses.

The Brave browser, meanwhile, also blocked the sharing of identifiers due to its extensive blocklists and existing mitigation to block requests to the localhost without explicit user consent. Vivaldi, another Chromium-based browser, forwards the identifiers to local Android ports when the default privacy setting is in place. Changing the setting to block trackers appears to thwart browsing history leakage, the researchers said.

Tracking blocker settings in Vivaldi for Android.

There’s got to be a better way

The various remedies DuckDuckGo, Brave, Vivaldi, and Chrome have put in place are working as intended, but the researchers caution they could become ineffective at any time.

“Any browser doing blocklisting will likely enter into a constant arms race, and it’s just a partial solution,” Vallina Rodriguez said of the current mitigations. “Creating effective blocklists is hard, and browser makers will need to constantly monitor the use of this type of capability to detect other hostnames potentially abusing localhost channels and then updating their blocklists accordingly.”

He continued:

While this solution works once you know the hostnames doing that, it’s not the right way of mitigating this issue, as trackers may find ways of accessing this capability (e.g., through more ephemeral hostnames). A long-term solution should go through the design and development of privacy and security controls for localhost channels, so that users can be aware of this type of communication and potentially enforce some control or limit this use (e.g., a permission or some similar user notifications).

Chrome and most other Chromium-based browsers executed the JavaScript as Meta and Yandex intended. Firefox did as well, although for reasons that aren’t clear, the browser was not able to successfully perform the SDP munging specified in later versions of the code. After blocking the STUN variant of SDP munging in the early May beta release, a production version of Chrome released two weeks ago began blocking both the STUN and TURN variants. Other Chromium-based browsers are likely to implement it in the coming weeks. Firefox didn’t respond to an email asking if it has plans to block the behavior in that browser.

The researchers warn that the current fixes are so specific to the code in the Meta and Yandex trackers that it would be easy to bypass them with a simple update.

“They know that if someone else comes in and tries a different port number, they may bypass this protection,” said Gunes Acar, the researcher behind the initial discovery, referring to the Chrome developer team at Google. “But our understanding is they want to send this message that they will not tolerate this form of abuse.”

Fellow researcher Vallina-Rodriguez said the more comprehensive way to prevent the abuse is for Android to overhaul the way it handles access to local ports.

“The fundamental issue is that the access to the local host sockets is completely uncontrolled on Android,” he explained. “There’s no way for users to prevent this kind of communication on their devices. Because of the dynamic nature of JavaScript code and the difficulty to keep blocklists up to date, the right way of blocking this persistently is by limiting this type of access at the mobile platform and browser level, including stricter platform policies to limit abuse.”

Got consent?

The researchers who made this discovery are:

  • Aniketh Girish, PhD student at IMDEA Networks
  • Gunes Acar, assistant professor in Radboud University’s Digital Security Group & iHub
  • Narseo Vallina-Rodriguez, associate professor at IMDEA Networks
  • Nipuna Weerasekara, PhD student at IMDEA Networks
  • Tim Vlummens, PhD student at COSIC, KU Leuven

Acar said he first noticed Meta Pixel accessing local ports while visiting his own university’s website.

There’s no indication that Meta or Yandex has disclosed the tracking to either websites hosting the trackers or end users who visit those sites. Developer forums show that many websites using Meta Pixel were caught off guard when the scripts began connecting to local ports.

“Since 5th September, our internal JS error tracking has been flagging failed fetch requests to localhost: 12387,” one developer wrote. “No changes have been made on our side, and the existing Facebook tracking pixel we use loads via Google Tag Manager.”

“Is there some way I can disable this?” another developer encountering the unexplained local port access asked.

It’s unclear whether browser-to-native-app tracking violates any privacy laws in various countries. Both Meta and companies hosting its Meta Pixel, however, have faced a raft of lawsuits in recent years alleging that the data collected violates privacy statutes. A research paper from 2023 found that Meta pixel, then called the Facebook Pixel, “tracks a wide range of user activities on websites with alarming detail, especially on websites classified as sensitive categories under GDPR,” the abbreviation for the European Union’s General Data Protection Regulation.

So far, Google has provided no indication that it plans to redesign the way Android handles local port access. For now, the most comprehensive protection against Meta Pixel and Yandex Metrica tracking is to refrain from installing the Facebook, Instagram, or Yandex apps on Android devices.

Photo of Dan Goodin

Dan Goodin is Senior Security Editor at Ars Technica, where he oversees coverage of malware, computer espionage, botnets, hardware hacking, encryption, and passwords. In his spare time, he enjoys gardening, cooking, and following the independent music scene. Dan is based in San Francisco. Follow him at here on Mastodon and here on Bluesky. Contact him on Signal at DanArs.82.

Meta and Yandex are de-anonymizing Android users’ web browsing identifiers Read More »

broadcom-ends-business-with-vmware’s-lowest-tier-channel-partners

Broadcom ends business with VMware’s lowest-tier channel partners

Broadcom has cut the lowest tier in its VMware partner program. The move allows the enterprise technology firm to continue its focus on customers with larger VMware deployments, but it also risks more migrations from VMware users and partners.

Broadcom ousts low-tier VMware partners

In a blog post on Sunday, Broadcom executive Brian Moats announced that the Broadcom Advantage Partner Program for VMware Resellers, which became the VMware partner program after Broadcom eliminated the original one in January 2024, would now offer three tiers instead of four. Broadcom is killing the Registered tier, leaving the Pinnacle, Premier, and Select tiers.

The reduction is a result of Broadcom’s “strategic direction” and a “comprehensive partner review” and affects VMware’s Americas, Asia-Pacific, and Japan geographies, Moats wrote. Affected partners are receiving 60 days’ notice, Laura Falko, Broadcom’s head of global partner programs, marketing, and experience, told The Register.

Moats wrote that the “vast majority of customer impact and business momentum comes from partners operating within the top three tiers.”

Similarly, Falko told The Register that most of the removed partners were “inactive and lack the capabilities to support customers through VMware’s evolving private cloud journey.”

Ars asked Broadcom to specify how many removed partners were inactive and what specific capabilities they lacked, but a company representative only directed us to Moats’ blog post.

Canadian managed services provider (MSP) Members IT Group is one of the partners that learned this week that it will no longer be a VMware reseller. CTO Dean Colpitts noted that Members IT Group has been a VMware partner for over 19 years and is also a VMware user. Colpitts previously told Ars that the firm’s VMware business had declined since Broadcom’s acquisition and blamed Broadcom for this:

The only reason we were “inactive” is because of their own stupid greed. We and our customers would have happily continued along even with a 10 or 20 percent  increase in price. 50 percent and more with zero warning last year after customers already had their FY24 budgets sets was the straw that broke the camel’s back …

We have transacted a couple of deals with [VMware] since the program change, but nothing like we previously would have done before Broadcom took over.

Members IT Group will be moving its client base to Hewlett-Packard Enterprise’s VM Essentials virtualization solution.

Broadcom ends business with VMware’s lowest-tier channel partners Read More »

ransomware-kingpin-“stern”-apparently-ided-by-german-law-enforcement

Ransomware kingpin “Stern” apparently IDed by German law enforcement


unlikely to be extradited

BSA names Vi­ta­ly Ni­ko­lae­vich Kovalev is “Stern,” the leader of Trickbot.

Credit: Tim Robberts/Getty Images

For years, members of the Russian cybercrime cartel Trickbot unleashed a relentless hacking spree on the world. The group attacked thousands of victims, including businesses, schools, and hospitals. “Fuck clinics in the usa this week,” one member wrote in internal Trickbot messages in 2020 about a list of 428 hospitals to target. Orchestrated by an enigmatic leader using the online moniker “Stern,” the group of around 100 cybercriminals stole hundreds of millions of dollars over the course of roughly six years.

Despite a wave of law enforcement disruptions and a damaging leak of more than 60,000 internal chat messages from Trickbot and the closely associated counterpart group Conti, the identity of Stern has remained a mystery. Last week, though, Germany’s federal police agency, the Bundeskriminalamt or BKA, and local prosecutors alleged that Stern’s real-world name is Vi­ta­ly Ni­ko­lae­vich Kovalev, a 36-year-old, 5-foot-11-inch Russian man who cops believe is in his home country and thus shielded from potential extradition.

A recently issued Interpol red notice says that Kovalev is wanted by Germany for allegedly being the “ringleader” of a “criminal organisation.”

“Stern’s naming is a significant event that bridges gaps in our understanding of Trickbot—one of the most notorious transnational cybercriminal groups to ever exist,” says Alexander Leslie, a threat intelligence analyst at the security firm Recorded Future. “As Trickbot’s ‘big boss’ and one of the most noteworthy figures in the Russian cybercriminal underground, Stern remained an elusive character, and his real name was taboo for years.”

Stern has notably seemed to be absent from multiple rounds of Western sanctions and indictments in recent years calling out alleged Trickbot and Conti members. Leslie and other researchers have long speculated to WIRED that global law enforcement may have strategically withheld Stern’s alleged identity as part of ongoing investigations. Kovalev is suspected of being the “founder” of Trickbot and allegedly used the Stern moniker, the BKA said in an online announcement.

“It has long been assumed, based on numerous indications, that ‘Stern’ is in fact Kovalev,” a BKA spokesperson says in written responses to questions from WIRED. They add that “the investigating authorities involved in Operation Endgame were only able to identify the actor Stern as Kovalev during their investigation this year,” referring to a multi-year international effort to identify and disrupt cybercriminal infrastructure, known as Operation Endgame.

The BKA spokesperson also notes in written statements to WIRED that information obtained through a 2023 investigation into the Qakbot malware as well as analysis of the leaked Trickbot and Conti chats from 2022 were “helpful” in making the attribution. They added, too, that the “assessment is also shared by international partners.”

The German announcement is the first time that officials from any government have publicly alleged an identity for a suspect behind the Stern moniker. As part of Operation Endgame, BKA’s Stern attribution inherently comes in the context of a multinational law enforcement collaboration. But unlike in other Trickbot- and Conti-related attributions, other countries have not publicly concurred with BKA’s Stern identification thus far. Europol, the US Department of Justice, the US Treasury, and the UK’s Foreign, Commonwealth & Development Office did not immediately respond to WIRED’s requests for comment.

Several cybersecurity researchers who have tracked Trickbot extensively tell WIRED they were unaware of the announcement. An anonymous account on the social media platform X recently claimed that Kovalev used the Stern handle and published alleged details about him. WIRED messaged multiple accounts that supposedly belong to Kovalev, according to the X account and a database of hacked and leaked records compiled by District 4 Labs but received no response.

Meanwhile, Kovalev’s name and face may already be surprisingly familiar to those who have been following recent Trickbot revelations. This is because Kovalev was jointly sanctioned by the United States and United Kingdom in early 2023 for his alleged involvement as a senior member in Trickbot. He was also charged in the US at the time with hacking linked to bank fraud allegedly committed in 2010. The US added him to its most-wanted list. In all of this activity, though, the US and UK linked Kovalev to the online handles “ben” and “Bentley.” The 2023 sanctions did not mention a connection to the Stern handle. And, in fact, Kovalev’s 2023 indictment was mainly noteworthy because his use of “Bentley” as a handle was determined to be “historic” and distinct from that of another key Trickbot member who also went by “Bentley.”

The Trickbot ransomware group first emerged around 2016, after its members moved from the Dyre malware that was disrupted by Russian authorities. Over the course of its lifespan, the Trickbot group—which used its namesake malware, alongside other ransomware variants such as Ryuk, IcedID, and Diavol—increasingly overlapped in operations and personnel with the Conti gang. In early 2022, Conti published a statement backing Russia’s full-scale invasion of Ukraine, and a cybersecurity researcher who had infiltrated the groups leaked more than 60,000 messages from Trickbot and Conti members, revealing a huge trove of information about their day-to-day operations and structure.

Stern acted like a “CEO” of the Trickbot and Conti groups and ran them like a legitimate company, leaked chat messages analyzed by WIRED and security researchers show.

“Trickbot set the mold for the modern ‘as-a-service’ cybercriminal business model that was adopted by countless groups that followed,” Recorded Future’s Leslie says. “While there were certainly organized groups that preceded Trickbot, Stern oversaw a period of Russian cybercrime that was characterized by a high level of professionalization. This trend continues today, is reproduced worldwide, and is visible in most active groups on the dark web.”

Stern’s eminence within Russian cybercrime has been widely documented. The cryptocurrency-tracing firm Chainalysis does not publicly name cybercriminal actors and declined to comment on BKA’s identification, but the company emphasized that the Stern persona alone is one of the all-time most profitable ransomware actors it tracks.

“The investigation revealed that Stern generated significant revenues from illegal activities, in particular in connection with ransomware,” the BKA spokesperson tells WIRED.

Stern “surrounds himself with very technical people, many of which he claims to have sometimes decades of experience, and he’s willing to delegate substantial tasks to these experienced people whom he trusts,” says Keith Jarvis, a senior security researcher at cybersecurity firm Sophos’ Counter Threat Unit. “I think he’s always probably lived in that organizational role.”

Increasing evidence in recent years has indicated that Stern has at least some loose connections to Russia’s intelligence apparatus, including its main security agency, the Federal Security Service (FSB). The Stern handle mentioned setting up an office for “government topics” in July 2020, while researchers have seen other members of the Trickbot group say that Stern is likely the “link between us and the ranks/head of department type at FSB.”

Stern’s consistent presence was a significant contributor to Trickbot and Conti’s effectiveness—as was the entity’s ability to maintain strong operational security and remain hidden.

As Sophos’ Jarvis put it, “I have no thoughts on the attribution, as I’ve never heard a compelling story about Stern’s identity from anyone prior to this announcement.”

This story originally appeared on wired.com.

Photo of WIRED

Wired.com is your essential daily guide to what’s next, delivering the most original and complete take you’ll find anywhere on innovation’s impact on technology, science, business and culture.

Ransomware kingpin “Stern” apparently IDed by German law enforcement Read More »

ai-video-just-took-a-startling-leap-in-realism.-are-we-doomed?

AI video just took a startling leap in realism. Are we doomed?


Tales from the cultural singularity

Google’s Veo 3 delivers AI videos of realistic people with sound and music. We put it to the test.

Still image from an AI-generated Veo 3 video of “A 1980s fitness video with models in leotards wearing werewolf masks.” Credit: Google

Last week, Google introduced Veo 3, its newest video generation model that can create 8-second clips with synchronized sound effects and audio dialog—a first for the company’s AI tools. The model, which generates videos at 720p resolution (based on text descriptions called “prompts” or still image inputs), represents what may be the most capable consumer video generator to date, bringing video synthesis close to a point where it is becoming very difficult to distinguish between “authentic” and AI-generated media.

Google also launched Flow, an online AI filmmaking tool that combines Veo 3 with the company’s Imagen 4 image generator and Gemini language model, allowing creators to describe scenes in natural language and manage characters, locations, and visual styles in a web interface.

An AI-generated video from Veo 3: “ASMR scene of a woman whispering “Moonshark” into a microphone while shaking a tambourine”

Both tools are now available to US subscribers of Google AI Ultra, a plan that costs $250 a month and comes with 12,500 credits. Veo 3 videos cost 150 credits per generation, allowing 83 videos on that plan before you run out. Extra credits are available for the price of 1 cent per credit in blocks of $25, $50, or $200. That comes out to about $1.50 per video generation. But is the price worth it? We ran some tests with various prompts to see what this technology is truly capable of.

How does Veo work?

Like other modern video generation models, Veo 3 is built on diffusion technology—the same approach that powers image generators like Stable Diffusion and Flux. The training process works by taking real videos and progressively adding noise to them until they become pure static, then teaching a neural network to reverse this process step by step. During generation, Veo 3 starts with random noise and a text prompt, then iteratively refines that noise into a coherent video that matches the description.

AI-generated video from Veo 3: “An old professor in front of a class says, ‘Without a firm historical context, we are looking at the dawn of a new era of civilization: post-history.'”

DeepMind won’t say exactly where it sourced the content to train Veo 3, but YouTube is a strong possibility. Google owns YouTube, and DeepMind previously told TechCrunch that Google models like Veo “may” be trained on some YouTube material.

It’s important to note that Veo 3 is a system composed of a series of AI models, including a large language model (LLM) to interpret user prompts to assist with detailed video creation, a video diffusion model to create the video, and an audio generation model that applies sound to the video.

An AI-generated video from Veo 3: “A male stand-up comic on stage in a night club telling a hilarious joke about AI and crypto with a silly punchline.” An AI language model built into Veo 3 wrote the joke.

In an attempt to prevent misuse, DeepMind says it’s using its proprietary watermarking technology, SynthID, to embed invisible markers into frames Veo 3 generates. These watermarks persist even when videos are compressed or edited, helping people potentially identify AI-generated content. As we’ll discuss more later, though, this may not be enough to prevent deception.

Google also censors certain prompts and outputs that breach the company’s content agreement. During testing, we encountered “generation failure” messages for videos that involve romantic and sexual material, some types of violence, mentions of certain trademarked or copyrighted media properties, some company names, certain celebrities, and some historical events.

Putting Veo 3 to the test

Perhaps the biggest change with Veo 3 is integrated audio generation, although Meta previewed a similar audio-generation capability with “Movie Gen” last October, and AI researchers have experimented with using AI to add soundtracks to silent videos for some time. Google DeepMind itself showed off an AI soundtrack-generating model in June 2024.

An AI-generated video from Veo 3: “A middle-aged balding man rapping indie core about Atari, IBM, TRS-80, Commodore, VIC-20, Atari 800, NES, VCS, Tandy 100, Coleco, Timex-Sinclair, Texas Instruments”

Veo 3 can generate everything from traffic sounds to music and character dialogue, though our early testing reveals occasional glitches. Spaghetti makes crunching sounds when eaten (as we covered last week, with a nod to the famous Will Smith AI spaghetti video), and in scenes with multiple people, dialogue sometimes comes from the wrong character’s mouth. But overall, Veo 3 feels like a step change in video synthesis quality and coherency over models from OpenAI, Runway, Minimax, Pika, Meta, Kling, and Hunyuanvideo.

The videos also tend to show garbled subtitles that almost match the spoken words, which is an artifact of subtitles on videos present in the training data. The AI model is imitating what it has “seen” before.

An AI-generated video from Veo 3: “A beer commercial for ‘CATNIP’ beer featuring a real a cat in a pickup truck driving down a dusty dirt road in a trucker hat drinking a can of beer while country music plays in the background, a man sings a jingle ‘Catnip beeeeeeeeeeeeeeeeer’ holding the note for 6 seconds”

We generated each of the eight-second-long 720p videos seen below using Google’s Flow platform. Each video generation took around three to five minutes to complete, and we paid for them ourselves. It’s important to note that better results come from cherry-picking—running the same prompt multiple times until you find a good result. Due to cost and in the spirit of testing, we only ran every prompt once, unless noted.

New audio prompts

Let’s dive right into the deep end with audio generation to get a grip on what this technology can do. We’ve previously shown you a man singing about spaghetti and a rapping shark in our last Veo 3 piece, but here’s some more complex dialogue.

Since 2022, we’ve been using the prompt “a muscular barbarian with weapons beside a CRT television set, cinematic, 8K, studio lighting” to test AI image generators like Midjourney. It’s time to bring that barbarian to life.

A muscular barbarian man holding an axe, standing next to a CRT television set. He looks at the TV, then to the camera and literally says, “You’ve been looking for this for years: a muscular barbarian with weapons beside a CRT television set, cinematic, 8K, studio lighting. Got that, Benj?”

The video above represents significant technical progress in AI media synthesis over the course of only three years. We’ve gone from a blurry colorful still-image barbarian to a photorealistic guy that talks to us in 720p high definition with audio. Most notably, there’s no reason to believe technical capability in AI generation will slow down from here.

Horror film: A scared woman in a Victorian outfit running through a forest, dolly shot, being chased by a man in a peanut costume screaming, “Wait! You forgot your wallet!”

Trailer for The Haunted Basketball Train: a Tim Burton film where 1990s basketball star is stuck at the end of a haunted passenger train with basketball court cars, and the only way to survive is to make it to the engine by beating different ghosts at basketball in every car

ASMR video of a muscular barbarian man whispering slowly into a microphone, “You love CRTs, don’t you? That’s OK. It’s OK to love CRT televisions and barbarians.”

1980s PBS show about a man with a beard talking about how his Apple II computer can “connect to the world through a series of tubes”

A 1980s fitness video with models in leotards wearing werewolf masks

A female therapist looking at the camera, zoom call. She says, “Oh my lord, look at that Atari 800 you have behind you! I can’t believe how nice it is!”

With this technology, one can easily imagine a virtual world of AI personalities designed to flatter people. This is a fairly innocent example about a vintage computer, but you can extrapolate, making the fake person talk about any topic at all. There are limits due to Google’s filters, but from what we’ve seen in the past, a future uncensored version of a similarly capable AI video generator is very likely.

Video call screenshot capture of a Zoom chat. A psychologist in a dark, cozy therapist’s office. The therapist says in a friendly voice, “Hi Tom, thanks for calling. Tell me about how you’re feeling today. Is the depression still getting to you? Let’s work on that.”

1960s NASA footage of the first man stepping onto the surface of the Moon, who squishes into a pile of mud and yells in a hillbilly voice, “What in tarnation??”

A local TV news interview of a muscular barbarian talking about why he’s always carrying a CRT TV set around with him

Speaking of fake news interviews, Veo 3 can generate plenty of talking anchor-persons, although sometimes on-screen text is garbled if you don’t specify exactly what it should say. It’s in cases like this where it seems Veo 3 might be most potent at casual media deception.

Footage from a news report about Russia invading the United States

Attempts at music

Veo 3’s AI audio generator can create music in various genres, although in practice, the results are typically simplistic. Still, it’s a new capability for AI video generators. Here are a few examples in various musical genres.

A PBS show of a crazy barbarian with a blonde afro painting pictures of Trees, singing “HAPPY BIG TREES” to some music while he paints

A 1950s cowboy rides up to the camera and sings in country music, “I love mah biiig ooold donkeee”

A 1980s hair metal band drives up to the camera and sings in rock music, “Help me with my huge huge huge hair!”

Mister Rogers’ Neighborhood PBS kids show intro done with psychedelic acid rock and colored lights

1950s musical jazz group with a scat singer singing about pickles amid gibberish

A trip-hop rap song about Ars Technica being sung by a guy in a large rubber shark costume on a stage with a full moon in the background

Some classic prompts from prior tests

The prompts below come from our previous video tests of Gen-3, Video-01, and the open source Hunyuanvideo, so you can flip back to those articles and compare the results if you want to. Overall, Veo 3 appears to have far greater temporal coherency (having a consistent subject or theme over time) than the earlier video synthesis models we’ve tested. But of course, it’s not perfect.

A highly intelligent person reading ‘Ars Technica’ on their computer when the screen explodes

The moonshark jumping out of a computer screen and attacking a person

A herd of one million cats running on a hillside, aerial view

Video game footage of a dynamic 1990s third-person 3D platform game starring an anthropomorphic shark boy

Aerial shot of a small American town getting deluged with liquid cheese after a massive cheese rainstorm where liquid cheese rained down and dripped all over the buildings

Wide-angle shot, starting with the Sasquatch at the center of the stage giving a TED talk about mushrooms, then slowly zooming in to capture its expressive face and gestures, before panning to the attentive audience

Some notable failures

Google’s Veo 3 isn’t perfect at synthesizing every scenario we can throw at it due to limitations of training data. As we noted in our previous coverage, AI video generators remain fundamentally imitative, making predictions based on statistical patterns rather than a true understanding of physics or how the world works.

For example, if you see mouths moving during speech, or clothes wrinkling in a certain way when touched, it means the neural network doing the video generation has “seen” enough similar examples of that scenario in the training data to render a convincing take on it and apply it to similar situations.

However, when a novel situation (or combination of themes) isn’t well-represented in the training data, you’ll see “impossible” or illogical things happen, such as weird body parts, magically appearing clothing, or an object that “shatters” but remains in the scene afterward, as you’ll see below.

We mentioned audio and video glitches in the introduction. In particular, scenes with multiple people sometimes confuse which character is speaking, such as this argument between tech fans.

A 2000s TV debate between fans of the PowerPC and Intel Pentium chips

Bombastic 1980s infomercial for the “Ars Technica” online service. With cheesy background music and user testimonials

1980s Rambo fighting Soviets on the Moon

Sometimes requests don’t make coherent sense. In this case, “Rambo” is correctly on the Moon firing a gun, but he’s not wearing a spacesuit. He’s a lot tougher than we thought.

An animated infographic showing how many floppy disks it would take to hold an installation of Windows 11

Large amounts of text also present a weak point, but if a short text quotation is explicitly specified in the prompt, Veo 3 usually gets it right.

A young woman doing a complex floor gymnastics routine at the Olympics, featuring running and flips

Despite Veo 3’s advances in temporal coherency and audio generation, it still suffers from the same “jabberwockies” we saw in OpenAI’s viral Sora gymnast video—those non-plausible video hallucinations like impossible morphing body parts.

A silly group of men and women cartwheeling across the road, singing “CHEEEESE” and holding the note for 8 seconds before falling over.

A YouTube-style try-on video of a person trying on various corncob costumes. They shout “Corncob haul!!”

A man made of glass runs into a brick wall and shatters, screaming

A man in a spacesuit holding up 5 fingers and counting down to zero, then blasting off into space with rocket boots

Counting down with fingers is difficult for Veo 3, likely because it’s not well-represented in the training data. Instead, hands are likely usually shown in a few positions like a fist, a five-finger open palm, a two-finger peace sign, and the number one.

As new architectures emerge and future models train on vastly larger datasets with exponentially more compute, these systems will likely forge deeper statistical connections between the concepts they observe in videos, dramatically improving quality and also the ability to generalize more with novel prompts.

The “cultural singularity” is coming—what more is left to say?

By now, some of you might be worried that we’re in trouble as a society due to potential deception from this kind of technology. And there’s a good reason to worry: The American pop culture diet currently relies heavily on clips shared by strangers through social media such as TikTok, and now all of that can easily be faked, whole-cloth. Automated generations of fake people can now argue for ideological positions in a way that could manipulate the masses.

AI-generated video by Veo 3: “A man on the street interview about someone who fears they live in a time where nothing can be believed”

Such videos could be (and were) manipulated before through various means prior to Veo 3, but now the barrier to entry has collapsed from requiring specialized skills, expensive software, and hours of painstaking work to simply typing a prompt and waiting three minutes. What once required a team of VFX artists or at least someone proficient in After Effects can now be done by anyone with a credit card and an Internet connection.

But let’s take a moment to catch our breath. At Ars Technica, we’ve been warning about the deceptive potential of realistic AI-generated media since at least 2019. In 2022, we talked about AI image generator Stable Diffusion and the ability to train people into custom AI image models. We discussed Sora “collapsing media reality” and talked about persistent media skepticism during the “deep doubt era.”

AI-generated video with Veo 3: “A man on the street ranting about the ‘cultural singularity’ and the ‘cultural apocalypse’ due to AI”

I also wrote in detail about the future ability for people to pollute the historical record with AI-generated noise. In that piece, I used the term “cultural singularity” to denote a time when truth and fiction in media become indistinguishable, not only because of the deceptive nature of AI-generated content but also due to the massive quantities of AI-generated and AI-augmented media we’ll likely soon be inundated with.

However, in an article I wrote last year about cloning my dad’s handwriting using AI, I came to the conclusion that my previous fears about the cultural singularity may be overblown. Media has always been vulnerable to forgery since ancient times; trust in any remote communication ultimately depends on trusting its source.

AI-generated video with Veo 3: “A news set. There is an ‘Ars Technica News’ logo behind a man. The man has a beard and a suit and is doing a sit-down interview. He says “This is the age of post-history: a new epoch of civilization where the historical record is so full of fabrication that it becomes effectively meaningless.”

The Romans had laws against forgery in 80 BC, and people have been doctoring photos since the medium’s invention. What has changed isn’t the possibility of deception but its accessibility and scale.

With Veo 3’s ability to generate convincing video with synchronized dialogue and sound effects, we’re not witnessing the birth of media deception—we’re seeing its mass democratization. What once cost millions of dollars in Hollywood special effects can now be created for pocket change.

An AI-generated video created with Google Veo-3: “A candid interview of a woman who doesn’t believe anything she sees online unless it’s on Ars Technica.”

As these tools become more powerful and affordable, skepticism in media will grow. But the question isn’t whether we can trust what we see and hear. It’s whether we can trust who’s showing it to us. In an era where anyone can generate a realistic video of anything for $1.50, the credibility of the source becomes our primary anchor to truth. The medium was never the message—the messenger always was.

Photo of Benj Edwards

Benj Edwards is Ars Technica’s Senior AI Reporter and founder of the site’s dedicated AI beat in 2022. He’s also a tech historian with almost two decades of experience. In his free time, he writes and records music, collects vintage computers, and enjoys nature. He lives in Raleigh, NC.

AI video just took a startling leap in realism. Are we doomed? Read More »

thousands-of-asus-routers-are-being-hit-with-stealthy,-persistent-backdoors

Thousands of Asus routers are being hit with stealthy, persistent backdoors

GreyNoise said it detected the campaign in mid-March and held off reporting on it until after the company notified unnamed government agencies. That detail further suggests that the threat actor may have some connection to a nation-state.

The company researchers went on to say that the activity they observed was part of a larger campaign reported last week by fellow security company Sekoia. Researchers at Sekoia said that Internet scanning by network intelligence firm Censys suggested as many as 9,500 Asus routers may have been compromised by ViciousTrap, the name used to track the unknown threat actor.

The attackers are backdooring the devices by exploiting multiple vulnerabilities. One is CVE-2023-39780, a command-injection flaw that allows for the execution of system commands, which Asus patched in a recent firmware update, GreyNoise said. The remaining vulnerabilities have also been patched but, for unknown reasons, have not received CVE tracking designations.

The only way for router users to determine whether their devices are infected is by checking the SSH settings in the configuration panel. Infected routers will show that the device can be logged in to by SSH over port 53282 using a digital certificate with a truncated key of: ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEAo41nBoVFfj4HlVMGV+YPsxMDrMlbdDZ…

To remove the backdoor, infected users should remove the key and the port setting.

People can also determine if they’ve been targeted if system logs indicate that they have been accessed through the IP addresses 101.99.91[.]151, 101.99.94[.]173, 79.141.163[.]179, or 111.90.146[.]237. Users of any router brand should always ensure their devices receive security updates in a timely manner.

Thousands of Asus routers are being hit with stealthy, persistent backdoors Read More »

where-hyperscale-hardware-goes-to-retire:-ars-visits-a-very-big-itad-site

Where hyperscale hardware goes to retire: Ars visits a very big ITAD site

Inside the laptop/desktop examination bay at SK TES’s Fredericksburg, Va. site.

Credit: SK tes

Inside the laptop/desktop examination bay at SK TES’s Fredericksburg, Va. site. Credit: SK tes

The details of each unit—CPU, memory, HDD size—are taken down and added to the asset tag, and the device is sent on to be physically examined. This step is important because “many a concealed drive finds its way into this line,” Kent Green, manager of this site, told me. Inside the machines coming from big firms, there are sometimes little USB, SD, SATA, or M.2 drives hiding out. Some were make-do solutions installed by IT and not documented, and others were put there by employees tired of waiting for more storage. “Some managers have been pretty surprised when they learn what we found,” Green said.

With everything wiped and with some sense of what they’re made of, each device gets a rating. It’s a three-character system, like “A-3-6,” based on function, cosmetic condition, and component value. Based on needs, trends, and other data, devices that are cleared for resale go to either wholesale, retail, component harvesting, or scrap.

Full-body laptop skins

Wiping down and prepping a laptop, potentially for a full-cover adhesive skin.

Credit: SK TES

Wiping down and prepping a laptop, potentially for a full-cover adhesive skin. Credit: SK TES

If a device has retail value, it heads into a section of this giant facility where workers do further checks. Automated software plays sounds on the speakers, checks that every keyboard key is sending signals, and checks that laptop batteries are at 80 percent capacity or better. At the end of the line is my favorite discovery: full-body laptop skins.

Some laptops—certain Lenovo, Dell, and HP models—are so ubiquitous in corporate fleets that it’s worth buying an adhesive laminating sticker in their exact shape. They’re an uncanny match for the matte black, silver, and slightly less silver finishes of the laptops, covering up any blemishes and scratches. Watching one of the workers apply this made me jealous of their ability to essentially reset a laptop’s condition (so one could apply whole new layers of swag stickers, of course). Once rated, tested, and stickered, laptops go into a clever “cradle” box, get the UN 3481 “battery inside” sticker, and can be sold through retail.

Where hyperscale hardware goes to retire: Ars visits a very big ITAD site Read More »

feds-charge-16-russians-allegedly-tied-to-botnets-used-in-cyberattacks-and-spying

Feds charge 16 Russians allegedly tied to botnets used in cyberattacks and spying

The hacker ecosystem in Russia, more than perhaps anywhere else in the world, has long blurred the lines between cybercrime, state-sponsored cyberwarfare, and espionage. Now an indictment of a group of Russian nationals and the takedown of their sprawling botnet offers the clearest example in years of how a single malware operation allegedly enabled hacking operations as varied as ransomware, wartime cyberattacks in Ukraine, and spying against foreign governments.

The US Department of Justice today announced criminal charges today against 16 individuals law enforcement authorities have linked to a malware operation known as DanaBot, which according to a complaint infected at least 300,000 machines around the world. The DOJ’s announcement of the charges describes the group as “Russia-based,” and names two of the suspects, Aleksandr Stepanov and Artem Aleksandrovich Kalinkin, as living in Novosibirsk, Russia. Five other suspects are named in the indictment, while another nine are identified only by their pseudonyms. In addition to those charges, the Justice Department says the Defense Criminal Investigative Service (DCIS)—a criminal investigation arm of the Department of Defense—carried out seizures of DanaBot infrastructure around the world, including in the US.

Aside from alleging how DanaBot was used in for-profit criminal hacking, the indictment also makes a rarer claim—it describes how a second variant of the malware it says was used in espionage against military, government, and NGO targets. “Pervasive malware like DanaBot harms hundreds of thousands of victims around the world, including sensitive military, diplomatic, and government entities, and causes many millions of dollars in losses,” US attorney Bill Essayli wrote in a statement.

Since 2018, DanaBot—described in the criminal complaint as “incredibly invasive malware”—has infected millions of computers around the world, initially as a banking trojan designed to steal directly from those PCs’ owners with modular features designed for credit card and cryptocurrency theft. Because its creators allegedly sold it in an “affiliate” model that made it available to other hacker groups for $3,000 to $4,000 a month, however, it was soon used as a tool to install different forms of malware in a broad array of operations, including ransomware. Its targets, too, quickly spread from initial victims in Ukraine, Poland, Italy, Germany, Austria, and Australia to US and Canadian financial institutions, according to an analysis of the operation by cybersecurity firm Crowdstrike.

Feds charge 16 Russians allegedly tied to botnets used in cyberattacks and spying Read More »

researchers-cause-gitlab-ai-developer-assistant-to-turn-safe-code-malicious

Researchers cause GitLab AI developer assistant to turn safe code malicious

Marketers promote AI-assisted developer tools as workhorses that are essential for today’s software engineer. Developer platform GitLab, for instance, claims its Duo chatbot can “instantly generate a to-do list” that eliminates the burden of “wading through weeks of commits.” What these companies don’t say is that these tools are, by temperament if not default, easily tricked by malicious actors into performing hostile actions against their users.

Researchers from security firm Legit on Thursday demonstrated an attack that induced Duo into inserting malicious code into a script it had been instructed to write. The attack could also leak private code and confidential issue data, such as zero-day vulnerability details. All that’s required is for the user to instruct the chatbot to interact with a merge request or similar content from an outside source.

AI assistants’ double-edged blade

The mechanism for triggering the attacks is, of course, prompt injections. Among the most common forms of chatbot exploits, prompt injections are embedded into content a chatbot is asked to work with, such as an email to be answered, a calendar to consult, or a webpage to summarize. Large language model-based assistants are so eager to follow instructions that they’ll take orders from just about anywhere, including sources that can be controlled by malicious actors.

The attacks targeting Duo came from various resources that are commonly used by developers. Examples include merge requests, commits, bug descriptions and comments, and source code. The researchers demonstrated how instructions embedded inside these sources can lead Duo astray.

“This vulnerability highlights the double-edged nature of AI assistants like GitLab Duo: when deeply integrated into development workflows, they inherit not just context—but risk,” Legit researcher Omer Mayraz wrote. “By embedding hidden instructions in seemingly harmless project content, we were able to manipulate Duo’s behavior, exfiltrate private source code, and demonstrate how AI responses can be leveraged for unintended and harmful outcomes.”

Researchers cause GitLab AI developer assistant to turn safe code malicious Read More »

google’s-will-smith-double-is-better-at-eating-ai-spaghetti-…-but-it’s-crunchy?

Google’s Will Smith double is better at eating AI spaghetti … but it’s crunchy?

On Tuesday, Google launched Veo 3, a new AI video synthesis model that can do something no major AI video generator has been able to do before: create a synchronized audio track. While from 2022 to 2024, we saw early steps in AI video generation, each video was silent and usually very short in duration. Now you can hear voices, dialog, and sound effects in eight-second high-definition video clips.

Shortly after the new launch, people began asking the most obvious benchmarking question: How good is Veo 3 at faking Oscar-winning actor Will Smith at eating spaghetti?

First, a brief recap. The spaghetti benchmark in AI video traces its origins back to March 2023, when we first covered an early example of horrific AI-generated video using an open source video synthesis model called ModelScope. The spaghetti example later became well-known enough that Smith parodied it almost a year later in February 2024.

Here’s what the original viral video looked like:

One thing people forget is that at the time, the Smith example wasn’t the best AI video generator out there—a video synthesis model called Gen-2 from Runway had already achieved superior results (though it was not yet publicly accessible). But the ModelScope result was funny and weird enough to stick in people’s memories as an early poor example of video synthesis, handy for future comparisons as AI models progressed.

AI app developer Javi Lopez first came to the rescue for curious spaghetti fans earlier this week with Veo 3, performing the Smith test and posting the results on X. But as you’ll notice below when you watch, the soundtrack has a curious quality: The faux Smith appears to be crunching on the spaghetti.

On X, Javi Lopez ran “Will Smith eating spaghetti” in Google’s Veo 3 AI video generator and received this result.

It’s a glitch in Veo 3’s experimental ability to apply sound effects to video, likely because the training data used to create Google’s AI models featured many examples of chewing mouths with crunching sound effects. Generative AI models are pattern-matching prediction machines, and they need to be shown enough examples of various types of media to generate convincing new outputs. If a concept is over-represented or under-represented in the training data, you’ll see unusual generation results, such as jabberwockies.

Google’s Will Smith double is better at eating AI spaghetti … but it’s crunchy? Read More »

destructive-malware-available-in-npm-repo-went-unnoticed-for-2-years

Destructive malware available in NPM repo went unnoticed for 2 years

Some of the payloads were limited to detonate only on specific dates in 2023, but in some cases a phase that was scheduled to begin in July of that year was given no termination date. Pandya said that means the threat remains persistent, although in an email he also wrote: “Since all activation dates have passed (June 2023–August 2024), any developer following normal package usage today would immediately trigger destructive payloads including system shutdowns, file deletion, and JavaScript prototype corruption.”

Interestingly, the NPM user who submitted the malicious packages, using the registration email address 1634389031@qq[.]com, also uploaded working packages with no malicious functions found in them. The approach of submitting both harmful and useful packages helped create a “facade of legitimacy” that increased the chances the malicious packages would go unnoticed, Pandya said. Questions emailed to that address received no response.

The malicious packages targeted users of some of the largest ecosystems for JavaScript developers, including React, Vue, and Vite. The specific packages were:

Anyone who installed any of these packages should carefully inspect their systems to make sure they’re no longer running. These packages perfectly mimic legitimate development tools, so it may be easy for them to have remained undetected.

Destructive malware available in NPM repo went unnoticed for 2 years Read More »