Biz & IT

Arm says it’s losing $50M a year in revenue from Qualcomm’s Snapdragon X Elite SoCs

Arm and Qualcomm’s dispute over Qualcomm’s Snapdragon X Elite chips is continuing in court this week, with executives from each company taking the stand and attempting to downplay the accusations from the other side.

If you haven’t been following along, the crux of the issue is Qualcomm’s purchase of a chip design firm called Nuvia in 2021. Nuvia was originally founded by ex-Apple chip designers to create high-performance Arm chips for servers, but Qualcomm took an interest in Nuvia’s work and acquired the company to help it create high-end Snapdragon processors for consumer PCs instead. Arm claims that this was a violation of its licensing agreements with Nuvia and is seeking to have all chips based on Nuvia technology destroyed.

According to Reuters, Arm CEO Rene Haas testified this week that the Nuvia acquisition is depriving Arm of about $50 million a year, on top of the roughly $300 million a year in fees that Qualcomm already pays Arm to use its instruction set and some elements of its chip designs. This is because Qualcomm pays Arm lower royalty rates than Nuvia had agreed to pay when it was still an independent company.

For its part, Qualcomm argued that Arm was mainly trying to push Qualcomm out of the PC market because Arm had its own plans to create high-end PC chips, though Haas claimed that Arm was merely exploring possible future options. Nuvia founder and current Qualcomm Senior VP of Engineering Gerard Williams III also testified that Arm’s technology comprises “one percent or less” of Qualcomm’s finished chip designs, minimizing Arm’s contributions to Snapdragon chips.

Although testimony is ongoing, Reuters reports that a jury verdict in the trial “could come as soon as this week.”

If its suit succeeds, Arm could potentially halt sales of all Snapdragon chips with Nuvia’s technology in them, which at this point includes both the Snapdragon X Elite and Plus chips for Windows PCs and the Snapdragon 8 Elite chips that Qualcomm recently introduced for high-end Android phones.

Call ChatGPT from any phone with OpenAI’s new 1-800 voice service

On Wednesday, OpenAI launched a 1-800-CHATGPT (1-800-242-8478) telephone number that anyone in the US can call to talk to ChatGPT via voice chat for up to 15 minutes for free. The company also says that people outside the US can send text messages to the same number for free using WhatsApp.

Upon calling, users hear a voice say, “Hello again, it’s ChatGPT, an AI assistant. Our conversation may be reviewed for safety. How can I help you?” Callers can ask ChatGPT anything they would normally ask the AI assistant and have a live, interactive conversation.

During a livestream demo of “Calling with ChatGPT” during Day 10 of “12 Days of OpenAI,” OpenAI employees demonstrated several examples of the telephone-based voice chat in action, asking ChatGPT to identify a distinctive house in California and for help in translating a message into Spanish for a friend. For fun, they showed calls from an iPhone, a flip phone, and a vintage rotary phone.

OpenAI developers demonstrate calling 1-800-CHATGPT during a livestream on December 18, 2024. Credit: OpenAI

OpenAI says the new features came out of an internal OpenAI “hack week” project that a team built just a few weeks ago. The company says its goal is to make ChatGPT more accessible if someone does not have a smartphone or a computer handy.

During the livestream, an OpenAI employee mentioned that 15 minutes of voice chatting are free and that you can download the app and create an account to get more. While the audio chat version seems to be running a full version of GPT-4o on the back end, a developer during the livestream said the free WhatsApp text mode is using GPT-4o mini.

T-Mobile users can try Starlink-enabled phone service for free during beta

T-Mobile today said it opened registration for the “T-Mobile Starlink” beta service that will enable text messaging via satellites in dead zones not covered by cell towers.

T-Mobile’s announcement said the service using Starlink’s low-Earth orbit satellites will “provid[e] coverage for the 500,000 square miles of land in the United States not covered by earth-bound cell towers.” Starlink parent SpaceX has so far launched over 300 satellites with direct-to-cell capabilities, T-Mobile noted.

A registration page says, “We expect the beta to begin in early 2025, starting with texting and expanding to data and voice over time. The beta is open to all T-Mobile postpaid customers for free, but capacity is limited.”

T-Mobile said the beta “is expected to work with most modern mobile phones” but will work best with “select smartphones.” People with those “select” devices will apparently have a better chance of getting into the beta.

“T-Mobile postpaid customers with optimized devices will be admitted on a ‘first come, first served’ basis,” T-Mobile said. “We’ll expand the beta to more customers and more devices as more satellites launch.”

Businesses and first responders can also register. “Because of the critical role these first responder agencies and individuals play in safeguarding our communities, T-Mobile is prioritizing this audience for the beta program,” the carrier said.

Commercial service sometime in 2025

T-Mobile said the commercial service will launch “sometime in 2025” but did not say how much it will cost.

Yearlong supply-chain attack targeting security pros steals 390K credentials

Screenshot showing a graph tracking mining activity. Credit: Checkmarx

But wait, there’s more

On Friday, Datadog revealed that MUT-1244 employed additional means for installing its second-stage malware. One was through a collection of at least 49 malicious entries posted to GitHub that contained Trojanized proof-of-concept exploits for security vulnerabilities. These packages help malicious and benevolent security personnel better understand the extent of vulnerabilities, including how they can be exploited or patched in real-life environments.

A second major vector for spreading @0xengine/xmlrpc was through phishing emails. Datadog discovered MUT-1244 had left a phishing template, accompanied by 2,758 email addresses scraped from arXiv, a site frequented by professional and academic researchers.

A phishing email used in the campaign. Credit: Datadog

The email, directed to people who develop or research software for high-performance computing, encouraged them to install a CPU microcode update that would significantly improve performance. Datadog later determined that the emails had been sent from October 5 through October 21.

Additional vectors discovered by Datadog. Credit: Datadog

Further adding to the impression of legitimacy, several of the malicious packages are automatically included in legitimate sources, such as Feedly Threat Intelligence and Vulnmon. These sites included the malicious packages in proof-of-concept repositories for the vulnerabilities the packages claimed to exploit.

“This increases their look of legitimacy and the likelihood that someone will run them,” Datadog said.

The attackers’ use of @0xengine/xmlrpc allowed them to steal some 390,000 credentials from infected machines. Datadog has determined the credentials were for use in logging into administrative accounts for websites that run the WordPress content management system.

Taken together, the many facets of the campaign—its longevity, its precision, the professional quality of the backdoor, and its multiple infection vectors—indicate that MUT-1244 was a skilled and determined threat actor. The group did, however, err by leaving the phishing email template and addresses in a publicly available account.

The ultimate motives of the attackers remain unclear. If the goal were to mine cryptocurrency, there would likely be better populations than security personnel to target. And if the objective was targeting researchers—as other recently discovered campaigns have done—it’s unclear why MUT-1244 would also employ cryptocurrency mining, an activity that’s often easy to detect.

Reports from both Checkmarx and Datadog include indicators people can use to check if they’ve been targeted.

Twirling body horror in gymnastics video exposes AI’s flaws


The slithy toves did gyre and gimble in the wabe

Nonsensical jabberwocky movements created by OpenAI’s Sora are typical for current AI-generated video, and here’s why.

A still image from an AI-generated video of an ever-morphing synthetic gymnast. Credit: OpenAI / Deedy

On Wednesday, a video from OpenAI’s newly launched Sora AI video generator went viral on social media, featuring a gymnast who sprouts extra limbs and briefly loses her head during what appears to be an Olympic-style floor routine.

As it turns out, the nonsensical synthesis errors in the video—what we like to call “jabberwockies”—hint at technical details about how AI video generators work and how they might get better in the future.

But before we dig into the details, let’s take a look at the video.

An AI-generated video of an impossible gymnast, created with OpenAI Sora.

In the video, we see a view of what looks like a floor gymnastics routine. The subject of the video flips and flails as new legs and arms rapidly and fluidly emerge and morph out of her twirling and transforming body. At one point, about 9 seconds in, she loses her head, and it reattaches to her body spontaneously.

“As cool as the new Sora is, gymnastics is still very much the Turing test for AI video,” wrote venture capitalist Deedy Das when he originally shared the video on X. The video inspired plenty of reaction jokes, such as this reply to a similar post on Bluesky: “hi, gymnastics expert here! this is not funny, gymnasts only do this when they’re in extreme distress.”

We reached out to Das, and he confirmed that he generated the video using Sora. He also provided the prompt, which was very long and split into four parts, generated by Anthropic’s Claude, using complex instructions like “The gymnast initiates from the back right corner, taking position with her right foot pointed behind in B-plus stance.”

“I’ve known for the last 6 months having played with text to video models that they struggle with complex physics movements like gymnastics,” Das told us in a conversation. “I had to try it [in Sora] because the character consistency seemed improved. Overall, it was an improvement because previously… the gymnast would just teleport away or change their outfit mid flip, but overall it still looks downright horrifying. We hoped AI video would learn physics by default, but that hasn’t happened yet!”

So what went wrong?

When examining how the video fails, you must first consider how Sora “knows” how to create anything that resembles a gymnastics routine. During the training phase, when the Sora model was created, OpenAI fed example videos of gymnastics routines (among many other types of videos) into a specialized neural network that associates the progression of images with text-based descriptions of them.

That type of training is a distinct phase that happens once before the model’s release. Later, when the finished model is running and you give a video-synthesis model like Sora a written prompt, it draws upon statistical associations between words and images to produce a predictive output. It’s continuously making next-frame predictions based on the last frame of the video. But Sora has another trick for attempting to preserve coherency over time. “By giving the model foresight of many frames at a time,” reads OpenAI’s Sora System Card, “we’ve solved a challenging problem of making sure a subject stays the same even when it goes out of view temporarily.”
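To make that general idea concrete in a very rough way, here’s a toy sketch (in Python) of the pattern described above: each new frame is generated from a window of recent frames rather than from the single most recent frame alone. The `predict_next` function below is a made-up stand-in, not anything resembling Sora’s actual model, which is learned from training data and conditioned on the text prompt.

```python
# Toy sketch only: illustrates "predict the next frame from a window of
# recent frames," not Sora's real architecture or training.
import numpy as np

def predict_next(context: np.ndarray) -> np.ndarray:
    """Stand-in "model": a weighted blend of the context frames plus noise."""
    weights = np.linspace(0.1, 1.0, len(context))[:, None, None, None]
    blended = (context * weights).sum(axis=0) / weights.sum()
    return blended + np.random.normal(scale=0.01, size=blended.shape)

def generate_video(first_frame: np.ndarray, num_frames: int, window: int = 8) -> np.ndarray:
    """Roll out frames one at a time, conditioning on up to `window` prior frames."""
    frames = [first_frame]
    for _ in range(num_frames - 1):
        context = np.stack(frames[-window:])   # the model "sees" many frames at a time
        frames.append(predict_next(context))
    return np.stack(frames)

video = generate_video(np.zeros((64, 64, 3)), num_frames=30)
print(video.shape)  # (30, 64, 64, 3)
```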

A still image from a moment where the AI-generated gymnast loses her head. It soon reattaches to her body. Credit: OpenAI / Deedy

Maybe not quite solved yet. In this case, rapidly moving limbs prove a particular challenge when attempting to predict the next frame properly. The result is an incoherent amalgam of gymnastics footage that shows the same gymnast performing running flips and spins, but Sora doesn’t know the correct order in which to assemble them. It’s drawing on statistical averages of wildly different body movements in its relatively limited training data of gymnastics videos, which also likely did not include limb-level precision in its descriptive metadata.

Sora doesn’t know anything about physics or how the human body should work, either. It’s drawing upon statistical associations between pixels in the videos in its training dataset to predict the next frame, with a little bit of look-ahead to keep things more consistent.

This problem is not unique to Sora. All AI video generators can produce wildly nonsensical results when your prompts reach too far past their training data, as we saw earlier this year when testing Runway’s Gen-3. In fact, we ran some gymnast prompts through the latest open source AI video model that may rival Sora in some ways, Hunyuan Video, and it produced similar twirling, morphing results, seen below. And we used a much simpler prompt than Das did with Sora.

An example from open source Chinese AI model Hunyuan Video with the prompt, “A young woman doing a complex floor gymnastics routine at the olympics, featuring running and flips.”

AI models based on transformer technology are fundamentally imitative in nature. They’re great at transforming one type of data into another type or morphing one style into another. What they’re not great at (yet) is producing coherent generations that are truly original. So if you happen to provide a prompt that closely matches a training video, you might get a good result. Otherwise, you may get madness.

As we wrote about image-synthesis model Stable Diffusion 3’s body horror generations earlier this year, “Basically, any time a user prompt homes in on a concept that isn’t represented well in the AI model’s training dataset, the image-synthesis model will confabulate its best interpretation of what the user is asking for. And sometimes that can be completely terrifying.”

For the engineers who make these models, success in AI video generation quickly becomes a question of how many examples (and how much training) you need before the model can generalize enough to produce convincing and coherent results. It’s also a question of metadata quality—how accurately the videos are labeled. In this case, OpenAI used an AI vision model to describe its training videos, which helped improve quality, but apparently not enough—yet.

We’re looking at an AI jabberwocky in action

In a way, the type of generation failure in the gymnast video is a form of confabulation (or hallucination, as some call it), but it’s even worse because it’s not coherent. So instead of calling it a confabulation, which is a plausible-sounding fabrication, we’re going to lean on a new term, “jabberwocky,” which Dictionary.com defines as “a playful imitation of language consisting of invented, meaningless words; nonsense; gibberish,” taken from Lewis Carroll’s nonsense poem of the same name. Imitation and nonsense, you say? Check and check.

We’ve covered jabberwockies in AI video before with people mocking Chinese video-synthesis models, a monstrously weird AI beer commercial, and even Will Smith eating spaghetti. They’re a form of misconfabulation where an AI model completely fails to produce a plausible output. This will not be the last time we see them, either.

How could AI video models get better and avoid jabberwockies?

In our coverage of Gen-3 Alpha, we called the threshold where you get a level of useful generalization in an AI model the “illusion of understanding,” where training data and training time reach a critical mass that produces good enough results to generalize across enough novel prompts.

One of the key reasons language models like OpenAI’s GPT-4 impressed users was that they finally reached a size where they had absorbed enough information to give the appearance of genuinely understanding the world. With video synthesis, achieving this same apparent level of “understanding” will require not just massive amounts of well-labeled training data but also the computational power to process it effectively.

AI boosters hope that these current models represent one of the key steps on the way to something like truly general intelligence (often called AGI) in text, or in AI video, what OpenAI and Runway researchers call “world simulators” or “world models” that somehow encode enough physics rules about the world to produce any realistic result.

Judging by the morphing alien shoggoth gymnast, that may still be a ways off. Still, it’s early days in AI video generation, and judging by how quickly AI image-synthesis models like Midjourney progressed from crude abstract shapes into coherent imagery, it’s likely video synthesis will have a similar trajectory over time. Until then, enjoy the AI-generated jabberwocky madness.

Photo of Benj Edwards

Benj Edwards is Ars Technica’s Senior AI Reporter and founder of the site’s dedicated AI beat in 2022. He’s also a tech historian with almost two decades of experience. In his free time, he writes and records music, collects vintage computers, and enjoys nature. He lives in Raleigh, NC.

Critical WordPress plugin vulnerability under active exploit threatens thousands

Thousands of sites running WordPress remain unpatched against a critical security flaw in a widely used plugin that was being actively exploited in attacks that allow for unauthenticated execution of malicious code, security researchers said.

The vulnerability, tracked as CVE-2024-11972, is found in Hunk Companion, a plugin that runs on 10,000 sites that use the WordPress content management system. The vulnerability, which carries a severity rating of 9.8 out of a possible 10, was patched earlier this week. At the time this post went live on Ars, figures provided on the Hunk Companion page indicated that less than 12 percent of users had installed the patch, meaning nearly 9,000 sites could be next to be targeted.

Significant, multifaceted threat

“This vulnerability represents a significant and multifaceted threat, targeting sites that use both a ThemeHunk theme and the Hunk Companion plugin,” Daniel Rodriguez, a researcher with WordPress security firm WP Scan, wrote. “With over 10,000 active installations, this exposed thousands of websites to anonymous, unauthenticated attacks capable of severely compromising their integrity.”

Rodriguez said WP Scan discovered the vulnerability while analyzing the compromise of a customer’s site. The firm found that the initial vector was CVE-2024-11972. The exploit allowed the hackers behind the attack to cause vulnerable sites to automatically navigate to wordpress.org and download WP Query Console, a plugin that hasn’t been updated in years.

OpenAI introduces “Santa Mode” to ChatGPT for ho-ho-ho voice chats

On Thursday, OpenAI announced that ChatGPT users can now talk to a simulated version of Santa Claus through the app’s voice mode, using AI to bring a North Pole connection to mobile devices, desktop apps, and web browsers during the holiday season.

The company added Santa’s voice and personality as a preset option in ChatGPT’s Advanced Voice Mode. Users can access Santa by tapping a snowflake icon next to the prompt bar or through voice settings. The feature works on iOS and Android mobile apps, chatgpt.com, and OpenAI’s Windows and MacOS applications. The Santa voice option will remain available to users worldwide until early January.

The conversations with Santa exist as temporary chats that won’t save to chat history or affect the model’s memory. OpenAI designed this limitation specifically for the holiday feature. Keep that in mind, because if you let your kids talk to Santa, the AI simulation won’t remember what they told it during previous conversations.

During a livestream for Day 6 of the company’s “12 days of OpenAI” marketing event, an OpenAI employee said that the company will reset each user’s Advanced Voice Mode usage limits one time as a gift, so that even if you’ve used up your Advanced Voice Mode time, you’ll get a chance to talk to Santa.

Russia takes unusual route to hack Starlink-connected devices in Ukraine

“Microsoft assesses that Secret Blizzard either used the Amadey malware as a service (MaaS) or accessed the Amadey command-and-control (C2) panels surreptitiously to download a PowerShell dropper on target devices,” Microsoft said. “The PowerShell dropper contained a Base64-encoded Amadey payload appended by code that invoked a request to Secret Blizzard C2 infrastructure.”

The ultimate objective was to install Tavdig, a backdoor Secret Blizzard used to conduct reconnaissance on targets of interest. The Amadey sample Microsoft uncovered collected information from device clipboards and harvested passwords from browsers. It would then go on to install a custom reconnaissance tool that was “selectively deployed to devices of further interest by the threat actor—for example, devices egressing from STARLINK IP addresses, a common signature of Ukrainian front-line military devices.”

When Secret Blizzard assessed a target was of high value, it would then install Tavdig to collect information, including “user info, netstat, and installed patches and to import registry settings into the compromised device.”

Earlier in the year, Microsoft said, company investigators observed Secret Blizzard using tools belonging to Storm-1837 to also target Ukrainian military personnel. Microsoft researchers wrote:

In January 2024, Microsoft observed a military-related device in Ukraine compromised by a Storm-1837 backdoor configured to use the Telegram API to launch a cmdlet with credentials (supplied as parameters) for an account on the file-sharing platform Mega. The cmdlet appeared to have facilitated remote connections to the account at Mega and likely invoked the download of commands or files for launch on the target device. When the Storm-1837 PowerShell backdoor launched, Microsoft noted a PowerShell dropper deployed to the device. The dropper was very similar to the one observed during the use of Amadey bots and contained two base64 encoded files containing the previously referenced Tavdig backdoor payload (rastls.dll) and the Symantec binary (kavp.exe).

As with the Amadey bot attack chain, Secret Blizzard used the Tavdig backdoor loaded into kavp.exe to conduct initial reconnaissance on the device. Secret Blizzard then used Tavdig to import a registry file, which was used to install and provide persistence for the KazuarV2 backdoor, which was subsequently observed launching on the affected device.

Although Microsoft did not directly observe the Storm-1837 PowerShell backdoor downloading the Tavdig loader, based on the temporal proximity between the execution of the Storm-1837 backdoor and the observation of the PowerShell dropper, Microsoft assesses that it is likely that the Storm-1837 backdoor was used by Secret Blizzard to deploy the Tavdig loader.

Wednesday’s post comes a week after both Microsoft and Lumen’s Black Lotus Labs reported that Secret Blizzard co-opted the tools of a Pakistan-based threat group tracked as Storm-0156 to install backdoors and collect intel on targets in South Asia. Microsoft first observed the activity in late 2022. In all, Microsoft said, Secret Blizzard has used the tools and infrastructure of at least six other threat groups in the past seven years.

Google goes “agentic” with Gemini 2.0’s ambitious AI agent features

On Wednesday, Google unveiled Gemini 2.0, the next generation of its AI-model family, starting with an experimental release called Gemini 2.0 Flash. The model family can generate text, images, and speech while processing multiple types of input including text, images, audio, and video. It’s similar to multimodal AI models like GPT-4o, which powers OpenAI’s ChatGPT.

“Gemini 2.0 Flash builds on the success of 1.5 Flash, our most popular model yet for developers, with enhanced performance at similarly fast response times,” said Google in a statement. “Notably, 2.0 Flash even outperforms 1.5 Pro on key benchmarks, at twice the speed.”

Gemini 2.0 Flash—which is the smallest model of the 2.0 family in terms of parameter count—launches today through Google’s developer platforms like Gemini API, AI Studio, and Vertex AI. However, its image generation and text-to-speech features remain limited to early access partners until January 2025. Google plans to integrate the tech into products like Android Studio, Chrome DevTools, and Firebase.
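For developers, access through the Gemini API follows Google’s existing SDK flow. Here’s a minimal sketch using the google-generativeai Python package; the model ID shown is the experimental identifier reported at launch, so treat it (and the placeholder API key) as assumptions to check against Google’s current documentation.

```python
# Minimal sketch of calling Gemini 2.0 Flash via the Gemini API.
# "gemini-2.0-flash-exp" is assumed to be the launch-era experimental model ID;
# replace the placeholder key with a real one from Google AI Studio.
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")
model = genai.GenerativeModel("gemini-2.0-flash-exp")
response = model.generate_content("In one sentence, what does 'agentic AI' mean?")
print(response.text)
```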

The company addressed potential misuse of generated content by implementing SynthID watermarking technology on all audio and images created by Gemini 2.0 Flash. This watermark appears in supported Google products to identify AI-generated content.

Google’s newest announcements lean heavily into the concept of agentic AI systems that can take action for you. “Over the last year, we have been investing in developing more agentic models, meaning they can understand more about the world around you, think multiple steps ahead, and take action on your behalf, with your supervision,” said Google CEO Sundar Pichai in a statement. “Today we’re excited to launch our next era of models built for this new agentic era.”

AI company trolls San Francisco with billboards saying “stop hiring humans”

Artisan CEO Jaspar Carmichael-Jack defended the campaign’s messaging in an interview with SFGate. “They are somewhat dystopian, but so is AI,” he told the outlet in a text message. “The way the world works is changing.” In another message he wrote, “We wanted something that would draw eyes—you don’t draw eyes with boring messaging.”

So what does Artisan actually do? Its main product is an AI “sales agent” called Ava that supposedly automates the work of finding and messaging potential customers. The company claims it works with “no human input” and costs 96% less than hiring a human for the same role. Given the current state of AI technology, though, it’s prudent to be skeptical of these claims.

Artisan also has plans to expand its AI tools beyond sales into areas like marketing, recruitment, finance, and design. Its sales agent appears to be its only existing product so far.

Meanwhile, the billboards remain visible throughout San Francisco, quietly fueling existential dread in a city that has already seen a great deal of tension since the pandemic. Some of the billboards feature additional messages, like “Hire Artisans, not humans,” and one that plays on angst over remote work: “Artisan’s Zoom cameras will never ‘not be working’ today.”

AMD’s trusted execution environment blown wide open by new BadRAM attack


Attack bypasses AMD protection promising security, even when a server is compromised.

One of the oldest maxims in hacking is that once an attacker has physical access to a device, it’s game over for its security. The basis is sound. It doesn’t matter how locked down a phone, computer, or other machine is; if someone intent on hacking it gains the ability to physically manipulate it, the chances of success are all but guaranteed.

In the age of cloud computing, this widely accepted principle is no longer universally true. Some of the world’s most sensitive information—health records, financial account information, sealed legal documents, and the like—now often resides on servers that receive day-to-day maintenance from unknown administrators working in cloud centers thousands of miles from the companies responsible for safeguarding it.

Bad (RAM) to the bone

In response, chipmakers have begun baking protections into their silicon to provide assurances that even if a server has been physically tampered with or infected with malware, sensitive data funneled through virtual machines can’t be accessed without an encryption key that’s known only to the VM administrator. Under this scenario, admins inside the cloud provider, law enforcement agencies with a court warrant, and hackers who manage to compromise the server are out of luck.

On Tuesday, an international team of researchers unveiled BadRAM, a proof-of-concept attack that completely undermines security assurances that chipmaker AMD makes to users of one of its most expensive and well-fortified microprocessor product lines. Starting with the AMD Epyc 7003 processor, a feature known as SEV-SNP—short for Secure Encrypted Virtualization and Secure Nested Paging—has provided the cryptographic means for certifying that a VM hasn’t been compromised by any sort of backdoor installed by someone with access to the physical machine running it.

If a VM has been backdoored, the cryptographic attestation will fail and immediately alert the VM admin of the compromise. Or at least that’s how SEV-SNP is designed to work. BadRAM is an attack that a server admin can carry out in minutes, using either about $10 of hardware, or in some cases, software only, to cause DDR4 or DDR5 memory modules to misreport during bootup the amount of memory capacity they have. From then on, SEV-SNP will be permanently made to suppress the cryptographic hash attesting its integrity even when the VM has been badly compromised.
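Stripped of the hardware details, the guarantee BadRAM defeats is a measurement check: the processor hashes the VM’s initial contents, and the VM owner compares that “launch digest” against a known-good value. The sketch below is a conceptual illustration only, not AMD’s actual SEV-SNP implementation, which relies on hardware-held keys and signed attestation reports rather than a bare hash comparison.

```python
# Conceptual illustration of measurement-based attestation, not AMD's
# actual SEV-SNP design (which uses hardware keys and signed reports).
import hashlib

def launch_digest(vm_image: bytes) -> str:
    """Measure the VM's initial contents (SEV-SNP uses a SHA-384 launch digest)."""
    return hashlib.sha384(vm_image).hexdigest()

clean_image = b"guest kernel + initrd + firmware"
expected = launch_digest(clean_image)                   # known-good value held by the VM owner
measured = launch_digest(clean_image + b" + backdoor")  # what a tampered VM would measure

if measured != expected:
    print("Attestation failed: the VM was modified before launch")
```

As described below, BadRAM lets the attacker read out a passing measurement through a memory alias and replay it, so a check like this never raises the alarm.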

“BadRAM completely undermines trust in AMD’s latest Secure Encrypted Virtualization (SEV-SNP) technology, which is widely deployed by major cloud providers, including Amazon AWS, Google Cloud, and Microsoft Azure,” members of the research team wrote in an email. “BadRAM for the first time studies the security risks of bad RAM—rogue memory modules that deliberately provide false information to the processor during startup. We show how BadRAM attackers can fake critical remote attestation reports and insert undetectable backdoors into _any_ SEV-protected VM.”

Compromising the AMD SEV ecosystem

On a website providing more information about the attack, the researchers wrote:

Modern computers increasingly use encryption to protect sensitive data in DRAM, especially in shared cloud environments with pervasive data breaches and insider threats. AMD’s Secure Encrypted Virtualization (SEV) is a cutting-edge technology that protects privacy and trust in cloud computing by encrypting a virtual machine’s (VM’s) memory and isolating it from advanced attackers, even those compromising critical infrastructure like the virtual machine manager or firmware.

We found that tampering with the embedded SPD chip on commercial DRAM modules allows attackers to bypass SEV protections—including AMD’s latest SEV-SNP version. For less than $10 in off-the-shelf equipment, we can trick the processor into allowing access to encrypted memory. We build on this BadRAM attack primitive to completely compromise the AMD SEV ecosystem, faking remote attestation reports and inserting backdoors into any SEV-protected VM.

In response to a vulnerability report filed by the researchers, AMD has already shipped patches to affected customers, a company spokesperson said. The researchers say there are no performance penalties, other than the possibility of additional time required during bootup. The vulnerability is tracked in the industry as CVE-2024-21944 and by the chipmaker as AMD-SB-3015.

A stroll down memory lane

Modern dynamic random access memory for servers typically comes in the form of DIMMs, short for Dual In-Line Memory Modules. The basic building blocks of these rectangular sticks are capacitors, which, when charged, represent a binary 1 and, when discharged, represent a 0. The capacitors are organized into cells, which are organized into arrays of rows and columns, which are further arranged into ranks and banks. The more capacitors that are stuffed into a DIMM, the more capacity it has to store data. Servers usually have multiple DIMMs that are organized into channels that can be processed in parallel.

For a server to store or access a particular piece of data, it first must locate where the bits representing it are stored in this vast configuration of transistors. Locations are tracked through addresses that map the channel, rank, bank, row, and column. For performance reasons, the task of translating these physical addresses to DRAM address bits—a job assigned to the memory controller—isn’t a one-to-one mapping. Rather, consecutive addresses are spread across different channels, ranks, and banks.
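As a rough illustration of what that mapping involves, a memory controller effectively slices a physical address into DRAM coordinates. The bit layout below is invented for the example; real controllers interleave consecutive addresses across channels, ranks, and banks with more elaborate, often XOR-based functions.

```python
# Illustrative only: real memory controllers use more complex, often
# XOR-based interleaving, and bit widths vary by platform.
def decode_dram_address(phys_addr: int) -> dict:
    column  = phys_addr & 0x3FF          # bits 0-9
    channel = (phys_addr >> 10) & 0x1    # bit 10
    bank    = (phys_addr >> 11) & 0xF    # bits 11-14
    rank    = (phys_addr >> 15) & 0x1    # bit 15
    row     = phys_addr >> 16            # remaining high bits
    return {"channel": channel, "rank": rank, "bank": bank, "row": row, "column": column}

print(decode_dram_address(0x12345678))
```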

Before the server can map these locations, it must first know how many DIMMs are connected and the total capacity of memory they provide. This information is provided each time the server boots, when the BIOS queries the SPD—short for Serial Presence Detect—chip found on the surface of the DIMM. This chip is responsible for providing the BIOS basic information about available memory. BadRAM causes the SPD chip to report that its capacity is twice what it actually is. It does this by adding an extra addressing bit.

To do this, a server admin need only briefly connect a specially programmed Raspberry Pi to the SPD chip just once.

The researchers’ Raspberry Pi connected to the SPD chip of a DIMM. Credit: De Meulemeester et al.

Hacking by numbers, 1, 2, 3

In some cases, with certain DIMM models that don’t adequately lock down the chip, the modification can likely be done through software. In either case, the modification need only occur once. From then on, the SPD chip will falsify the memory capacity available.

Next, the server admin configures the operating system to ignore the newly created “ghost memory,” meaning the top half of the capacity reported by the compromised SPD chip, but continue to map to the lower half of the real memory. On Linux, this configuration can be done with the `memmap` kernel command-line parameter. The researchers’ paper, titled BadRAM: Practical Memory Aliasing Attacks on Trusted Execution Environments, provides many more details about the attack.
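As a purely hypothetical illustration of that step: on a machine with 32 GB of real memory misreporting itself as 64 GB, a boot parameter along the lines of `memmap=32G$0x800000000` would mark the upper, ghost half of the reported address space as reserved so Linux never allocates from it. The exact size and start address depend on the system’s real memory map, and the `$` character typically needs escaping in bootloader configuration files.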

Next, a script developed as part of BadRAM allows the attacker to quickly find the memory locations of ghost memory bits. These aliases give the attacker access to memory regions that SEV-SNP is supposed to make inaccessible. This allows the attacker to read and write to these protected memory regions.

Access to this normally fortified region of memory allows the attacker to copy the cryptographic hash SEV-SNP creates to attest to the integrity of the VM. The access also permits the attacker to boot an SEV-compliant VM that has been backdoored. Normally, this malicious VM would trigger a warning in the form of a cryptographic hash. BadRAM allows the attacker to replace this attestation failure hash with the attestation success hash collected earlier.

The primary steps involved in BadRAM attacks are:

  1. Compromise the memory module to lie about its size and thus trick the CPU into accessing the nonexistent ghost addresses that have been silently mapped to existing memory regions.
  2. Find aliases. These addresses map to the same DRAM location.
  3. Bypass CPU Access Control. The aliases allow the attacker to bypass memory protections that are supposed to prevent the reading of and writing to regions storing sensitive data.

Beware of the ghost bit

For those looking for more technical details, Jesse De Meulemeester, who along with Luca Wilke was lead co-author of the paper, provided the following, which more casual readers can skip:

In our attack, there are two addresses that go to the same DRAM location; one is the original address, the other one is what we call the alias.

When we modify the SPD, we double its size. At a low level, this means all memory addresses now appear to have one extra bit. This extra bit is what we call the “ghost” bit, it is the address bit that is used by the CPU, but is not used (thus ignored) by the DIMM. The addresses for which this “ghost” bit is 0 are the original addresses, and the addresses for which this bit is 1 is the “ghost” memory.

This explains how we can access protected data like the launch digest. The launch digest is stored at an address with the ghost bit set to 0, and this address is protected; any attempt to access it is blocked by the CPU. However, if we try to access the same address with the ghost bit set to 1, the CPU treats it as a completely new address and allows access. On the DIMM side, the ghost bit is ignored, so both addresses (with ghost bit 0 or 1) point to the same physical memory location.

A small example to illustrate this:

Original SPD: 4 bit addresses:

CPU: address 1101 -> DIMM: address 1101

Modified SPD: Reports 5 bits even though it only has 4:

CPU: address 01101 -> DIMM: address 1101

CPU: address 11101 -> DIMM: address 1101

In this case 01101 is the protected address, 11101 is the alias. Even though to the CPU they seem like two different addresses, they go to the same DRAM location.
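Expressed as a short Python sketch of the same toy example (the four-bit geometry is purely for illustration):

```python
# Toy model of the aliasing above: the CPU believes addresses are 5 bits
# wide, but the DIMM only decodes the low 4 bits, so the top "ghost" bit
# is silently ignored.
DIMM_ADDRESS_BITS = 4
GHOST_BIT = 1 << DIMM_ADDRESS_BITS            # the bit the DIMM ignores

def dimm_address(cpu_address: int) -> int:
    """What the DIMM actually sees: the ghost bit is masked off."""
    return cpu_address & (GHOST_BIT - 1)

protected = 0b01101                # CPU address of protected data (ghost bit = 0)
alias     = protected | GHOST_BIT  # same DRAM location, ghost bit = 1

assert dimm_address(protected) == dimm_address(alias) == 0b1101
print(f"{protected:05b} and {alias:05b} both map to DIMM address {dimm_address(alias):04b}")
```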

As noted earlier, some DIMM models don’t lock down the SPD chip, a failure that likely makes software-only modifications possible. Specifically, the researchers found that two DDR4 models made by Corsair contained this flaw.

In a statement, AMD officials wrote:

AMD believes exploiting the disclosed vulnerability requires an attacker either having physical access to the system, operating system kernel access on a system with unlocked memory modules, or installing a customized, malicious BIOS. AMD recommends utilizing memory modules that lock Serial Presence Detect (SPD), as well as following physical system security best practices. AMD has also released firmware updates to customers to mitigate the vulnerability.

Members of the research team are from KU Leuven, the University of Lübeck, and the University of Birmingham.

The researchers tested BadRAM against Intel SGX, a competing trusted execution technology from AMD’s much bigger rival that promises integrity assurances comparable to SEV-SNP’s. The classic, now-discontinued version of SGX did allow reading of protected regions, but not writing to them. The current Intel Scalable SGX and Intel TDX technologies, however, allowed no reading or writing. Since a comparable Arm processor wasn’t available for testing, it’s unknown whether Arm’s designs are vulnerable.

Despite the lack of universality, the researchers warned that the design flaws underpinning the BadRAM vulnerability may creep into other systems, and that designers should always apply mitigations like the ones AMD has now put in place.

“Since our BadRAM primitive is generic, we argue that such countermeasures should be considered when designing a system against untrusted DRAM,” the researchers wrote in their paper. “While advanced hardware-level attacks could potentially circumvent the currently used countermeasures, further research is required to judge whether they can be carried out in an impactful attacker model.”

Photo of Dan Goodin

Dan Goodin is Senior Security Editor at Ars Technica, where he oversees coverage of malware, computer espionage, botnets, hardware hacking, encryption, and passwords. In his spare time, he enjoys gardening, cooking, and following the independent music scene. Dan is based in San Francisco. Follow him on Mastodon and Bluesky. Contact him on Signal at DanArs.82.

Reddit debuts AI-powered discussion search—but will users like it?

The company then went on to strike deals with major tech firms, including a $60 million agreement with Google in February 2024 and a partnership with OpenAI in May 2024 that integrated Reddit content into ChatGPT.

But Reddit users haven’t been entirely happy with the deals. In October 2024, London-based Redditors began posting false restaurant recommendations to manipulate search results and keep tourists away from their favorite spots. This coordinated effort to feed incorrect information into AI systems demonstrated how user communities might intentionally “poison” AI training data over time.

The potential for trouble

It’s tempting to lean heavily into generative AI technology while it is trendy, but the move could also represent a challenge for the company. For example, Reddit’s AI-powered summaries could potentially draw from inaccurate information featured on the site and provide incorrect answers, or they may draw inaccurate conclusions from correct information.

We will keep an eye on Reddit’s new AI-powered search tool to see if it resists the type of confabulation that we’ve seen with Google’s AI Overview, an AI summary bot that has been a critical failure so far.

Advance Publications, which owns Ars Technica parent Condé Nast, is the largest shareholder of Reddit.
